00:00:00.001 Started by upstream project "autotest-per-patch" build number 132838 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.160 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.161 The recommended git tool is: git 00:00:00.161 using credential 00000000-0000-0000-0000-000000000002 00:00:00.163 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.198 Fetching changes from the remote Git repository 00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.225 Using shallow fetch with depth 1 00:00:00.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.225 > git --version # timeout=10 00:00:00.250 > git --version # 'git version 2.39.2' 00:00:00.250 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.182 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.196 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.208 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.208 > git config core.sparsecheckout # timeout=10 00:00:07.218 > git read-tree -mu HEAD # timeout=10 00:00:07.233 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.250 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.250 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.376 [Pipeline] Start of Pipeline 00:00:07.388 [Pipeline] library 00:00:07.389 Loading library shm_lib@master 00:00:07.390 Library shm_lib@master is cached. Copying from home. 00:00:07.404 [Pipeline] node 00:00:07.417 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.419 [Pipeline] { 00:00:07.425 [Pipeline] catchError 00:00:07.426 [Pipeline] { 00:00:07.437 [Pipeline] wrap 00:00:07.446 [Pipeline] { 00:00:07.452 [Pipeline] stage 00:00:07.453 [Pipeline] { (Prologue) 00:00:07.466 [Pipeline] echo 00:00:07.467 Node: VM-host-SM9 00:00:07.473 [Pipeline] cleanWs 00:00:07.481 [WS-CLEANUP] Deleting project workspace... 00:00:07.482 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.488 [WS-CLEANUP] done 00:00:07.690 [Pipeline] setCustomBuildProperty 00:00:07.803 [Pipeline] httpRequest 00:00:10.827 [Pipeline] echo 00:00:10.827 Sorcerer 10.211.164.101 is dead 00:00:10.835 [Pipeline] httpRequest 00:00:13.698 [Pipeline] echo 00:00:13.700 Sorcerer 10.211.164.101 is alive 00:00:13.709 [Pipeline] retry 00:00:13.711 [Pipeline] { 00:00:13.724 [Pipeline] httpRequest 00:00:13.728 HttpMethod: GET 00:00:13.729 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.730 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.754 Response Code: HTTP/1.1 200 OK 00:00:13.755 Success: Status code 200 is in the accepted range: 200,404 00:00:13.755 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.671 [Pipeline] } 00:00:15.685 [Pipeline] // retry 00:00:15.692 [Pipeline] sh 00:00:15.970 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.985 [Pipeline] httpRequest 00:00:19.004 [Pipeline] echo 00:00:19.006 Sorcerer 10.211.164.101 is dead 00:00:19.016 [Pipeline] httpRequest 00:00:19.314 [Pipeline] echo 00:00:19.316 Sorcerer 10.211.164.20 is alive 00:00:19.325 [Pipeline] retry 00:00:19.327 [Pipeline] { 00:00:19.341 [Pipeline] httpRequest 00:00:19.346 HttpMethod: GET 00:00:19.346 URL: http://10.211.164.101/packages/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz 00:00:19.347 Sending request to url: http://10.211.164.101/packages/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz 00:00:19.355 Response Code: HTTP/1.1 200 OK 00:00:19.355 Success: Status code 200 is in the accepted range: 200,404 00:00:19.356 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz 00:00:57.205 [Pipeline] } 00:00:57.223 [Pipeline] // retry 00:00:57.231 [Pipeline] sh 00:00:57.513 + tar --no-same-owner -xf spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz 00:01:00.814 [Pipeline] sh 00:01:01.092 + git -C spdk log --oneline -n5 00:01:01.092 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:01.092 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:01.092 7219bd1a7 thread: use extended version of fd group add 00:01:01.092 1a5bdab32 event: use extended version of fd group add 00:01:01.092 92d1e663a bdev/nvme: Fix depopulating a namespace twice 00:01:01.110 [Pipeline] writeFile 00:01:01.124 [Pipeline] sh 00:01:01.405 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:01.416 [Pipeline] sh 00:01:01.695 + cat autorun-spdk.conf 00:01:01.695 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.695 SPDK_TEST_NVMF=1 00:01:01.695 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.695 SPDK_TEST_URING=1 00:01:01.695 SPDK_TEST_USDT=1 00:01:01.695 SPDK_RUN_UBSAN=1 00:01:01.695 NET_TYPE=virt 00:01:01.695 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.703 RUN_NIGHTLY=0 00:01:01.705 [Pipeline] } 00:01:01.718 [Pipeline] // stage 00:01:01.731 [Pipeline] stage 00:01:01.733 [Pipeline] { (Run VM) 00:01:01.746 [Pipeline] sh 00:01:02.026 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:02.026 + echo 'Start stage prepare_nvme.sh' 00:01:02.026 Start stage prepare_nvme.sh 00:01:02.026 + [[ -n 0 ]] 00:01:02.026 + disk_prefix=ex0 00:01:02.026 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:02.026 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:02.026 + source 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:02.026 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.026 ++ SPDK_TEST_NVMF=1 00:01:02.026 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.026 ++ SPDK_TEST_URING=1 00:01:02.026 ++ SPDK_TEST_USDT=1 00:01:02.026 ++ SPDK_RUN_UBSAN=1 00:01:02.026 ++ NET_TYPE=virt 00:01:02.026 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.026 ++ RUN_NIGHTLY=0 00:01:02.026 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:02.026 + nvme_files=() 00:01:02.026 + declare -A nvme_files 00:01:02.026 + backend_dir=/var/lib/libvirt/images/backends 00:01:02.026 + nvme_files['nvme.img']=5G 00:01:02.026 + nvme_files['nvme-cmb.img']=5G 00:01:02.026 + nvme_files['nvme-multi0.img']=4G 00:01:02.026 + nvme_files['nvme-multi1.img']=4G 00:01:02.026 + nvme_files['nvme-multi2.img']=4G 00:01:02.026 + nvme_files['nvme-openstack.img']=8G 00:01:02.026 + nvme_files['nvme-zns.img']=5G 00:01:02.026 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:02.026 + (( SPDK_TEST_FTL == 1 )) 00:01:02.026 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:02.026 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:02.026 + for nvme in "${!nvme_files[@]}" 00:01:02.026 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:02.026 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.026 + for nvme in "${!nvme_files[@]}" 00:01:02.026 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:02.026 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.026 + for nvme in "${!nvme_files[@]}" 00:01:02.026 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:02.026 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:02.026 + for nvme in "${!nvme_files[@]}" 00:01:02.026 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:02.285 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.285 + for nvme in "${!nvme_files[@]}" 00:01:02.285 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:02.285 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.285 + for nvme in "${!nvme_files[@]}" 00:01:02.285 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:02.285 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.285 + for nvme in "${!nvme_files[@]}" 00:01:02.285 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:02.544 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.544 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:02.544 + echo 'End stage prepare_nvme.sh' 00:01:02.544 End stage prepare_nvme.sh 00:01:02.556 [Pipeline] sh 00:01:02.837 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:02.837 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:02.837 00:01:02.837 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:02.837 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:02.837 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:02.837 HELP=0 00:01:02.837 DRY_RUN=0 00:01:02.837 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:02.837 NVME_DISKS_TYPE=nvme,nvme, 00:01:02.837 NVME_AUTO_CREATE=0 00:01:02.837 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:02.837 NVME_CMB=,, 00:01:02.837 NVME_PMR=,, 00:01:02.837 NVME_ZNS=,, 00:01:02.837 NVME_MS=,, 00:01:02.837 NVME_FDP=,, 00:01:02.837 SPDK_VAGRANT_DISTRO=fedora39 00:01:02.837 SPDK_VAGRANT_VMCPU=10 00:01:02.837 SPDK_VAGRANT_VMRAM=12288 00:01:02.837 SPDK_VAGRANT_PROVIDER=libvirt 00:01:02.837 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:02.837 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:02.837 SPDK_OPENSTACK_NETWORK=0 00:01:02.837 VAGRANT_PACKAGE_BOX=0 00:01:02.837 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:02.837 FORCE_DISTRO=true 00:01:02.837 VAGRANT_BOX_VERSION= 00:01:02.837 EXTRA_VAGRANTFILES= 00:01:02.837 NIC_MODEL=e1000 00:01:02.837 00:01:02.837 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:02.837 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:06.207 Bringing machine 'default' up with 'libvirt' provider... 00:01:06.466 ==> default: Creating image (snapshot of base box volume). 00:01:06.466 ==> default: Creating domain with the following settings... 
00:01:06.466 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733866087_1e55ce1dc39026b6c545 00:01:06.466 ==> default: -- Domain type: kvm 00:01:06.466 ==> default: -- Cpus: 10 00:01:06.466 ==> default: -- Feature: acpi 00:01:06.466 ==> default: -- Feature: apic 00:01:06.466 ==> default: -- Feature: pae 00:01:06.466 ==> default: -- Memory: 12288M 00:01:06.466 ==> default: -- Memory Backing: hugepages: 00:01:06.466 ==> default: -- Management MAC: 00:01:06.466 ==> default: -- Loader: 00:01:06.466 ==> default: -- Nvram: 00:01:06.466 ==> default: -- Base box: spdk/fedora39 00:01:06.466 ==> default: -- Storage pool: default 00:01:06.466 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733866087_1e55ce1dc39026b6c545.img (20G) 00:01:06.466 ==> default: -- Volume Cache: default 00:01:06.466 ==> default: -- Kernel: 00:01:06.466 ==> default: -- Initrd: 00:01:06.466 ==> default: -- Graphics Type: vnc 00:01:06.466 ==> default: -- Graphics Port: -1 00:01:06.466 ==> default: -- Graphics IP: 127.0.0.1 00:01:06.466 ==> default: -- Graphics Password: Not defined 00:01:06.466 ==> default: -- Video Type: cirrus 00:01:06.466 ==> default: -- Video VRAM: 9216 00:01:06.466 ==> default: -- Sound Type: 00:01:06.466 ==> default: -- Keymap: en-us 00:01:06.466 ==> default: -- TPM Path: 00:01:06.466 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:06.466 ==> default: -- Command line args: 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:06.466 ==> default: -> value=-drive, 00:01:06.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:06.466 ==> default: -> value=-drive, 00:01:06.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.466 ==> default: -> value=-drive, 00:01:06.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.466 ==> default: -> value=-drive, 00:01:06.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:06.466 ==> default: -> value=-device, 00:01:06.466 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.727 ==> default: Creating shared folders metadata... 00:01:06.727 ==> default: Starting domain. 00:01:08.104 ==> default: Waiting for domain to get an IP address... 00:01:22.987 ==> default: Waiting for SSH to become available... 00:01:24.365 ==> default: Configuring and enabling network interfaces... 
00:01:28.553 default: SSH address: 192.168.121.214:22 00:01:28.553 default: SSH username: vagrant 00:01:28.553 default: SSH auth method: private key 00:01:30.456 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:38.565 ==> default: Mounting SSHFS shared folder... 00:01:39.499 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:39.499 ==> default: Checking Mount.. 00:01:40.885 ==> default: Folder Successfully Mounted! 00:01:40.885 ==> default: Running provisioner: file... 00:01:41.461 default: ~/.gitconfig => .gitconfig 00:01:42.037 00:01:42.037 SUCCESS! 00:01:42.037 00:01:42.037 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:42.037 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:42.037 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:42.037 00:01:42.046 [Pipeline] } 00:01:42.064 [Pipeline] // stage 00:01:42.074 [Pipeline] dir 00:01:42.074 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:42.076 [Pipeline] { 00:01:42.091 [Pipeline] catchError 00:01:42.093 [Pipeline] { 00:01:42.106 [Pipeline] sh 00:01:42.385 + vagrant ssh-config --host vagrant 00:01:42.385 + sed -ne /^Host/,$p 00:01:42.385 + tee ssh_conf 00:01:46.572 Host vagrant 00:01:46.572 HostName 192.168.121.214 00:01:46.572 User vagrant 00:01:46.572 Port 22 00:01:46.572 UserKnownHostsFile /dev/null 00:01:46.572 StrictHostKeyChecking no 00:01:46.572 PasswordAuthentication no 00:01:46.572 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:46.572 IdentitiesOnly yes 00:01:46.572 LogLevel FATAL 00:01:46.572 ForwardAgent yes 00:01:46.572 ForwardX11 yes 00:01:46.572 00:01:46.587 [Pipeline] withEnv 00:01:46.589 [Pipeline] { 00:01:46.602 [Pipeline] sh 00:01:46.881 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:46.881 source /etc/os-release 00:01:46.881 [[ -e /image.version ]] && img=$(< /image.version) 00:01:46.881 # Minimal, systemd-like check. 00:01:46.881 if [[ -e /.dockerenv ]]; then 00:01:46.881 # Clear garbage from the node's name: 00:01:46.881 # agt-er_autotest_547-896 -> autotest_547-896 00:01:46.881 # $HOSTNAME is the actual container id 00:01:46.881 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:46.881 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:46.881 # We can assume this is a mount from a host where container is running, 00:01:46.881 # so fetch its hostname to easily identify the target swarm worker. 
00:01:46.881 container="$(< /etc/hostname) ($agent)" 00:01:46.881 else 00:01:46.881 # Fallback 00:01:46.881 container=$agent 00:01:46.881 fi 00:01:46.881 fi 00:01:46.881 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:46.881 00:01:47.150 [Pipeline] } 00:01:47.165 [Pipeline] // withEnv 00:01:47.173 [Pipeline] setCustomBuildProperty 00:01:47.187 [Pipeline] stage 00:01:47.190 [Pipeline] { (Tests) 00:01:47.205 [Pipeline] sh 00:01:47.485 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.499 [Pipeline] sh 00:01:47.779 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:48.067 [Pipeline] timeout 00:01:48.068 Timeout set to expire in 1 hr 0 min 00:01:48.069 [Pipeline] { 00:01:48.084 [Pipeline] sh 00:01:48.398 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:48.963 HEAD is now at 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:48.974 [Pipeline] sh 00:01:49.252 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:49.522 [Pipeline] sh 00:01:49.800 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:49.815 [Pipeline] sh 00:01:50.093 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:50.093 ++ readlink -f spdk_repo 00:01:50.093 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:50.093 + [[ -n /home/vagrant/spdk_repo ]] 00:01:50.093 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:50.093 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:50.093 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:50.093 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:50.093 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:50.093 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:50.093 + cd /home/vagrant/spdk_repo 00:01:50.093 + source /etc/os-release 00:01:50.093 ++ NAME='Fedora Linux' 00:01:50.093 ++ VERSION='39 (Cloud Edition)' 00:01:50.093 ++ ID=fedora 00:01:50.093 ++ VERSION_ID=39 00:01:50.093 ++ VERSION_CODENAME= 00:01:50.093 ++ PLATFORM_ID=platform:f39 00:01:50.093 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:50.093 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:50.093 ++ LOGO=fedora-logo-icon 00:01:50.093 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:50.093 ++ HOME_URL=https://fedoraproject.org/ 00:01:50.093 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:50.093 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:50.093 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:50.093 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:50.093 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:50.093 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:50.093 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:50.093 ++ SUPPORT_END=2024-11-12 00:01:50.093 ++ VARIANT='Cloud Edition' 00:01:50.093 ++ VARIANT_ID=cloud 00:01:50.093 + uname -a 00:01:50.093 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:50.093 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:50.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:50.660 Hugepages 00:01:50.660 node hugesize free / total 00:01:50.660 node0 1048576kB 0 / 0 00:01:50.660 node0 2048kB 0 / 0 00:01:50.660 00:01:50.660 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:50.660 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:50.660 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:50.660 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:01:50.660 + rm -f /tmp/spdk-ld-path 00:01:50.660 + source autorun-spdk.conf 00:01:50.660 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.660 ++ SPDK_TEST_NVMF=1 00:01:50.660 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.660 ++ SPDK_TEST_URING=1 00:01:50.660 ++ SPDK_TEST_USDT=1 00:01:50.660 ++ SPDK_RUN_UBSAN=1 00:01:50.660 ++ NET_TYPE=virt 00:01:50.660 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.660 ++ RUN_NIGHTLY=0 00:01:50.660 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:50.660 + [[ -n '' ]] 00:01:50.660 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:50.918 + for M in /var/spdk/build-*-manifest.txt 00:01:50.918 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:50.918 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.918 + for M in /var/spdk/build-*-manifest.txt 00:01:50.918 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:50.918 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.918 + for M in /var/spdk/build-*-manifest.txt 00:01:50.918 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:50.918 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.918 ++ uname 00:01:50.918 + [[ Linux == \L\i\n\u\x ]] 00:01:50.918 + sudo dmesg -T 00:01:50.918 + sudo dmesg --clear 00:01:50.918 + dmesg_pid=5266 00:01:50.918 + [[ Fedora Linux == FreeBSD ]] 00:01:50.918 + sudo dmesg -Tw 00:01:50.918 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.918 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.918 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:50.918 + [[ -x /usr/src/fio-static/fio ]] 00:01:50.918 + export FIO_BIN=/usr/src/fio-static/fio 00:01:50.918 + FIO_BIN=/usr/src/fio-static/fio 00:01:50.918 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:50.918 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:50.918 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:50.918 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.918 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.918 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:50.918 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.918 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.918 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.918 21:28:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:50.918 21:28:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.918 21:28:51 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:50.918 21:28:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:50.918 21:28:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.918 21:28:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:50.918 21:28:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:50.918 21:28:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:50.918 21:28:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:50.918 21:28:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:50.918 21:28:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:50.919 21:28:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.919 21:28:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.919 21:28:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.919 21:28:51 -- paths/export.sh@5 -- $ export PATH 00:01:50.919 21:28:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.919 21:28:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:50.919 21:28:51 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:50.919 21:28:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866131.XXXXXX 00:01:50.919 21:28:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866131.xUFSV4 00:01:50.919 21:28:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:50.919 21:28:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:50.919 21:28:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:50.919 21:28:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:50.919 21:28:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:50.919 21:28:51 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:50.919 21:28:51 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:50.919 21:28:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.177 21:28:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:51.177 21:28:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:51.177 21:28:51 -- pm/common@17 -- $ local monitor 00:01:51.177 21:28:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.177 21:28:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.177 21:28:51 -- pm/common@25 -- $ sleep 1 00:01:51.177 21:28:51 -- pm/common@21 -- $ date +%s 00:01:51.177 21:28:51 -- pm/common@21 -- $ date +%s 00:01:51.177 21:28:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866131 00:01:51.177 21:28:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866131 00:01:51.177 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866131_collect-cpu-load.pm.log 00:01:51.177 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866131_collect-vmstat.pm.log 00:01:52.111 21:28:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:52.111 21:28:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.111 21:28:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.111 21:28:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.111 21:28:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.111 Tue Dec 10 09:28:52 PM UTC 2024 00:01:52.111 21:28:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.111 v25.01-pre-329-g626389917 00:01:52.111 21:28:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:52.111 21:28:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.111 21:28:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.111 21:28:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.111 21:28:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.111 21:28:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.111 ************************************ 00:01:52.111 START TEST ubsan 00:01:52.111 ************************************ 00:01:52.111 using ubsan 00:01:52.111 21:28:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:52.111 00:01:52.111 real 0m0.000s 00:01:52.111 user 0m0.000s 00:01:52.111 sys 0m0.000s 00:01:52.111 21:28:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:52.111 21:28:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.111 ************************************ 00:01:52.111 END TEST ubsan 00:01:52.111 ************************************ 00:01:52.111 21:28:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:52.111 21:28:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:52.111 21:28:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:52.111 21:28:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:52.369 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:52.369 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:52.628 Using 'verbs' RDMA provider 00:02:08.479 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:20.725 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:20.725 Creating mk/config.mk...done. 00:02:20.725 Creating mk/cc.flags.mk...done. 00:02:20.725 Type 'make' to build. 
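Note: the configure step recorded above can be approximated outside the CI pipeline. The sketch below is a minimal local equivalent, assuming an SPDK checkout as the current directory and that build prerequisites (normally installed via scripts/pkgdep.sh, not shown in this log) are already present; the fio path /usr/src/fio mirrors the one used in this run and would need to exist on the local machine. The flags are copied verbatim from the autobuild.sh invocation logged above, and the build itself is the same "make" the next stage runs (the CI used -j10 on a 10-vCPU VM).

    # Rough local equivalent of the configure + build performed in this log (assumptions noted above)
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j"$(nproc)"
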
00:02:20.725 21:29:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:20.725 21:29:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:20.725 21:29:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:20.725 21:29:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.725 ************************************ 00:02:20.725 START TEST make 00:02:20.725 ************************************ 00:02:20.725 21:29:20 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:20.725 make[1]: Nothing to be done for 'all'. 00:02:32.947 The Meson build system 00:02:32.947 Version: 1.5.0 00:02:32.947 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:32.947 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:32.947 Build type: native build 00:02:32.947 Program cat found: YES (/usr/bin/cat) 00:02:32.947 Project name: DPDK 00:02:32.947 Project version: 24.03.0 00:02:32.947 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.947 C linker for the host machine: cc ld.bfd 2.40-14 00:02:32.947 Host machine cpu family: x86_64 00:02:32.947 Host machine cpu: x86_64 00:02:32.947 Message: ## Building in Developer Mode ## 00:02:32.947 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.947 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:32.947 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.947 Program python3 found: YES (/usr/bin/python3) 00:02:32.947 Program cat found: YES (/usr/bin/cat) 00:02:32.947 Compiler for C supports arguments -march=native: YES 00:02:32.947 Checking for size of "void *" : 8 00:02:32.947 Checking for size of "void *" : 8 (cached) 00:02:32.947 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:32.947 Library m found: YES 00:02:32.947 Library numa found: YES 00:02:32.947 Has header "numaif.h" : YES 00:02:32.947 Library fdt found: NO 00:02:32.947 Library execinfo found: NO 00:02:32.947 Has header "execinfo.h" : YES 00:02:32.947 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.947 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.947 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.947 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.947 Run-time dependency openssl found: YES 3.1.1 00:02:32.947 Run-time dependency libpcap found: YES 1.10.4 00:02:32.947 Has header "pcap.h" with dependency libpcap: YES 00:02:32.947 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.947 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.947 Compiler for C supports arguments -Wformat: YES 00:02:32.947 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.947 Compiler for C supports arguments -Wformat-security: NO 00:02:32.947 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.947 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.947 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.947 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.947 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.947 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.947 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.947 Compiler for C supports arguments -Wundef: YES 00:02:32.947 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.947 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:32.947 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:32.947 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.947 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:32.947 Program objdump found: YES (/usr/bin/objdump) 00:02:32.947 Compiler for C supports arguments -mavx512f: YES 00:02:32.947 Checking if "AVX512 checking" compiles: YES 00:02:32.947 Fetching value of define "__SSE4_2__" : 1 00:02:32.947 Fetching value of define "__AES__" : 1 00:02:32.947 Fetching value of define "__AVX__" : 1 00:02:32.947 Fetching value of define "__AVX2__" : 1 00:02:32.947 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.947 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.947 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.947 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.947 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.947 Fetching value of define "__PCLMUL__" : 1 00:02:32.947 Fetching value of define "__RDRND__" : 1 00:02:32.947 Fetching value of define "__RDSEED__" : 1 00:02:32.947 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.947 Fetching value of define "__znver1__" : (undefined) 00:02:32.947 Fetching value of define "__znver2__" : (undefined) 00:02:32.947 Fetching value of define "__znver3__" : (undefined) 00:02:32.947 Fetching value of define "__znver4__" : (undefined) 00:02:32.947 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.947 Message: lib/log: Defining dependency "log" 00:02:32.947 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.947 Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.947 Checking for function "getentropy" : NO 00:02:32.948 Message: lib/eal: Defining dependency "eal" 00:02:32.948 Message: lib/ring: Defining dependency "ring" 00:02:32.948 Message: lib/rcu: Defining dependency "rcu" 00:02:32.948 Message: lib/mempool: Defining dependency "mempool" 00:02:32.948 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.948 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.948 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.948 Compiler for C supports arguments -mpclmul: YES 00:02:32.948 Compiler for C supports arguments -maes: YES 00:02:32.948 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.948 Compiler for C supports arguments -mavx512bw: YES 00:02:32.948 Compiler for C supports arguments -mavx512dq: YES 00:02:32.948 Compiler for C supports arguments -mavx512vl: YES 00:02:32.948 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.948 Compiler for C supports arguments -mavx2: YES 00:02:32.948 Compiler for C supports arguments -mavx: YES 00:02:32.948 Message: lib/net: Defining dependency "net" 00:02:32.948 Message: lib/meter: Defining dependency "meter" 00:02:32.948 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.948 Message: lib/pci: Defining dependency "pci" 00:02:32.948 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.948 Message: lib/hash: Defining dependency "hash" 00:02:32.948 Message: lib/timer: Defining dependency "timer" 00:02:32.948 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.948 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.948 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.948 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.948 Message: lib/power: Defining 
dependency "power" 00:02:32.948 Message: lib/reorder: Defining dependency "reorder" 00:02:32.948 Message: lib/security: Defining dependency "security" 00:02:32.948 Has header "linux/userfaultfd.h" : YES 00:02:32.948 Has header "linux/vduse.h" : YES 00:02:32.948 Message: lib/vhost: Defining dependency "vhost" 00:02:32.948 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.948 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.948 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.948 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.948 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:32.948 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:32.948 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:32.948 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:32.948 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:32.948 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:32.948 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:32.948 Configuring doxy-api-html.conf using configuration 00:02:32.948 Configuring doxy-api-man.conf using configuration 00:02:32.948 Program mandb found: YES (/usr/bin/mandb) 00:02:32.948 Program sphinx-build found: NO 00:02:32.948 Configuring rte_build_config.h using configuration 00:02:32.948 Message: 00:02:32.948 ================= 00:02:32.948 Applications Enabled 00:02:32.948 ================= 00:02:32.948 00:02:32.948 apps: 00:02:32.948 00:02:32.948 00:02:32.948 Message: 00:02:32.948 ================= 00:02:32.948 Libraries Enabled 00:02:32.948 ================= 00:02:32.948 00:02:32.948 libs: 00:02:32.948 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:32.948 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:32.948 cryptodev, dmadev, power, reorder, security, vhost, 00:02:32.948 00:02:32.948 Message: 00:02:32.948 =============== 00:02:32.948 Drivers Enabled 00:02:32.948 =============== 00:02:32.948 00:02:32.948 common: 00:02:32.948 00:02:32.948 bus: 00:02:32.948 pci, vdev, 00:02:32.948 mempool: 00:02:32.948 ring, 00:02:32.948 dma: 00:02:32.948 00:02:32.948 net: 00:02:32.948 00:02:32.948 crypto: 00:02:32.948 00:02:32.948 compress: 00:02:32.948 00:02:32.948 vdpa: 00:02:32.948 00:02:32.948 00:02:32.948 Message: 00:02:32.948 ================= 00:02:32.948 Content Skipped 00:02:32.948 ================= 00:02:32.948 00:02:32.948 apps: 00:02:32.948 dumpcap: explicitly disabled via build config 00:02:32.948 graph: explicitly disabled via build config 00:02:32.948 pdump: explicitly disabled via build config 00:02:32.948 proc-info: explicitly disabled via build config 00:02:32.948 test-acl: explicitly disabled via build config 00:02:32.948 test-bbdev: explicitly disabled via build config 00:02:32.948 test-cmdline: explicitly disabled via build config 00:02:32.948 test-compress-perf: explicitly disabled via build config 00:02:32.948 test-crypto-perf: explicitly disabled via build config 00:02:32.948 test-dma-perf: explicitly disabled via build config 00:02:32.948 test-eventdev: explicitly disabled via build config 00:02:32.948 test-fib: explicitly disabled via build config 00:02:32.948 test-flow-perf: explicitly disabled via build config 00:02:32.948 test-gpudev: explicitly disabled via build config 00:02:32.948 test-mldev: explicitly disabled via build config 00:02:32.948 test-pipeline: 
explicitly disabled via build config 00:02:32.948 test-pmd: explicitly disabled via build config 00:02:32.948 test-regex: explicitly disabled via build config 00:02:32.948 test-sad: explicitly disabled via build config 00:02:32.948 test-security-perf: explicitly disabled via build config 00:02:32.948 00:02:32.948 libs: 00:02:32.948 argparse: explicitly disabled via build config 00:02:32.948 metrics: explicitly disabled via build config 00:02:32.948 acl: explicitly disabled via build config 00:02:32.948 bbdev: explicitly disabled via build config 00:02:32.948 bitratestats: explicitly disabled via build config 00:02:32.948 bpf: explicitly disabled via build config 00:02:32.948 cfgfile: explicitly disabled via build config 00:02:32.948 distributor: explicitly disabled via build config 00:02:32.948 efd: explicitly disabled via build config 00:02:32.948 eventdev: explicitly disabled via build config 00:02:32.948 dispatcher: explicitly disabled via build config 00:02:32.948 gpudev: explicitly disabled via build config 00:02:32.948 gro: explicitly disabled via build config 00:02:32.948 gso: explicitly disabled via build config 00:02:32.948 ip_frag: explicitly disabled via build config 00:02:32.948 jobstats: explicitly disabled via build config 00:02:32.948 latencystats: explicitly disabled via build config 00:02:32.948 lpm: explicitly disabled via build config 00:02:32.948 member: explicitly disabled via build config 00:02:32.948 pcapng: explicitly disabled via build config 00:02:32.948 rawdev: explicitly disabled via build config 00:02:32.948 regexdev: explicitly disabled via build config 00:02:32.948 mldev: explicitly disabled via build config 00:02:32.948 rib: explicitly disabled via build config 00:02:32.948 sched: explicitly disabled via build config 00:02:32.948 stack: explicitly disabled via build config 00:02:32.948 ipsec: explicitly disabled via build config 00:02:32.948 pdcp: explicitly disabled via build config 00:02:32.948 fib: explicitly disabled via build config 00:02:32.948 port: explicitly disabled via build config 00:02:32.948 pdump: explicitly disabled via build config 00:02:32.948 table: explicitly disabled via build config 00:02:32.948 pipeline: explicitly disabled via build config 00:02:32.948 graph: explicitly disabled via build config 00:02:32.948 node: explicitly disabled via build config 00:02:32.948 00:02:32.948 drivers: 00:02:32.948 common/cpt: not in enabled drivers build config 00:02:32.948 common/dpaax: not in enabled drivers build config 00:02:32.948 common/iavf: not in enabled drivers build config 00:02:32.948 common/idpf: not in enabled drivers build config 00:02:32.948 common/ionic: not in enabled drivers build config 00:02:32.948 common/mvep: not in enabled drivers build config 00:02:32.948 common/octeontx: not in enabled drivers build config 00:02:32.948 bus/auxiliary: not in enabled drivers build config 00:02:32.948 bus/cdx: not in enabled drivers build config 00:02:32.948 bus/dpaa: not in enabled drivers build config 00:02:32.948 bus/fslmc: not in enabled drivers build config 00:02:32.948 bus/ifpga: not in enabled drivers build config 00:02:32.948 bus/platform: not in enabled drivers build config 00:02:32.948 bus/uacce: not in enabled drivers build config 00:02:32.948 bus/vmbus: not in enabled drivers build config 00:02:32.948 common/cnxk: not in enabled drivers build config 00:02:32.948 common/mlx5: not in enabled drivers build config 00:02:32.948 common/nfp: not in enabled drivers build config 00:02:32.948 common/nitrox: not in enabled drivers build config 
00:02:32.948 common/qat: not in enabled drivers build config 00:02:32.948 common/sfc_efx: not in enabled drivers build config 00:02:32.948 mempool/bucket: not in enabled drivers build config 00:02:32.948 mempool/cnxk: not in enabled drivers build config 00:02:32.948 mempool/dpaa: not in enabled drivers build config 00:02:32.948 mempool/dpaa2: not in enabled drivers build config 00:02:32.948 mempool/octeontx: not in enabled drivers build config 00:02:32.948 mempool/stack: not in enabled drivers build config 00:02:32.948 dma/cnxk: not in enabled drivers build config 00:02:32.948 dma/dpaa: not in enabled drivers build config 00:02:32.948 dma/dpaa2: not in enabled drivers build config 00:02:32.948 dma/hisilicon: not in enabled drivers build config 00:02:32.948 dma/idxd: not in enabled drivers build config 00:02:32.948 dma/ioat: not in enabled drivers build config 00:02:32.948 dma/skeleton: not in enabled drivers build config 00:02:32.948 net/af_packet: not in enabled drivers build config 00:02:32.948 net/af_xdp: not in enabled drivers build config 00:02:32.948 net/ark: not in enabled drivers build config 00:02:32.948 net/atlantic: not in enabled drivers build config 00:02:32.948 net/avp: not in enabled drivers build config 00:02:32.948 net/axgbe: not in enabled drivers build config 00:02:32.948 net/bnx2x: not in enabled drivers build config 00:02:32.948 net/bnxt: not in enabled drivers build config 00:02:32.948 net/bonding: not in enabled drivers build config 00:02:32.949 net/cnxk: not in enabled drivers build config 00:02:32.949 net/cpfl: not in enabled drivers build config 00:02:32.949 net/cxgbe: not in enabled drivers build config 00:02:32.949 net/dpaa: not in enabled drivers build config 00:02:32.949 net/dpaa2: not in enabled drivers build config 00:02:32.949 net/e1000: not in enabled drivers build config 00:02:32.949 net/ena: not in enabled drivers build config 00:02:32.949 net/enetc: not in enabled drivers build config 00:02:32.949 net/enetfec: not in enabled drivers build config 00:02:32.949 net/enic: not in enabled drivers build config 00:02:32.949 net/failsafe: not in enabled drivers build config 00:02:32.949 net/fm10k: not in enabled drivers build config 00:02:32.949 net/gve: not in enabled drivers build config 00:02:32.949 net/hinic: not in enabled drivers build config 00:02:32.949 net/hns3: not in enabled drivers build config 00:02:32.949 net/i40e: not in enabled drivers build config 00:02:32.949 net/iavf: not in enabled drivers build config 00:02:32.949 net/ice: not in enabled drivers build config 00:02:32.949 net/idpf: not in enabled drivers build config 00:02:32.949 net/igc: not in enabled drivers build config 00:02:32.949 net/ionic: not in enabled drivers build config 00:02:32.949 net/ipn3ke: not in enabled drivers build config 00:02:32.949 net/ixgbe: not in enabled drivers build config 00:02:32.949 net/mana: not in enabled drivers build config 00:02:32.949 net/memif: not in enabled drivers build config 00:02:32.949 net/mlx4: not in enabled drivers build config 00:02:32.949 net/mlx5: not in enabled drivers build config 00:02:32.949 net/mvneta: not in enabled drivers build config 00:02:32.949 net/mvpp2: not in enabled drivers build config 00:02:32.949 net/netvsc: not in enabled drivers build config 00:02:32.949 net/nfb: not in enabled drivers build config 00:02:32.949 net/nfp: not in enabled drivers build config 00:02:32.949 net/ngbe: not in enabled drivers build config 00:02:32.949 net/null: not in enabled drivers build config 00:02:32.949 net/octeontx: not in enabled drivers 
build config 00:02:32.949 net/octeon_ep: not in enabled drivers build config 00:02:32.949 net/pcap: not in enabled drivers build config 00:02:32.949 net/pfe: not in enabled drivers build config 00:02:32.949 net/qede: not in enabled drivers build config 00:02:32.949 net/ring: not in enabled drivers build config 00:02:32.949 net/sfc: not in enabled drivers build config 00:02:32.949 net/softnic: not in enabled drivers build config 00:02:32.949 net/tap: not in enabled drivers build config 00:02:32.949 net/thunderx: not in enabled drivers build config 00:02:32.949 net/txgbe: not in enabled drivers build config 00:02:32.949 net/vdev_netvsc: not in enabled drivers build config 00:02:32.949 net/vhost: not in enabled drivers build config 00:02:32.949 net/virtio: not in enabled drivers build config 00:02:32.949 net/vmxnet3: not in enabled drivers build config 00:02:32.949 raw/*: missing internal dependency, "rawdev" 00:02:32.949 crypto/armv8: not in enabled drivers build config 00:02:32.949 crypto/bcmfs: not in enabled drivers build config 00:02:32.949 crypto/caam_jr: not in enabled drivers build config 00:02:32.949 crypto/ccp: not in enabled drivers build config 00:02:32.949 crypto/cnxk: not in enabled drivers build config 00:02:32.949 crypto/dpaa_sec: not in enabled drivers build config 00:02:32.949 crypto/dpaa2_sec: not in enabled drivers build config 00:02:32.949 crypto/ipsec_mb: not in enabled drivers build config 00:02:32.949 crypto/mlx5: not in enabled drivers build config 00:02:32.949 crypto/mvsam: not in enabled drivers build config 00:02:32.949 crypto/nitrox: not in enabled drivers build config 00:02:32.949 crypto/null: not in enabled drivers build config 00:02:32.949 crypto/octeontx: not in enabled drivers build config 00:02:32.949 crypto/openssl: not in enabled drivers build config 00:02:32.949 crypto/scheduler: not in enabled drivers build config 00:02:32.949 crypto/uadk: not in enabled drivers build config 00:02:32.949 crypto/virtio: not in enabled drivers build config 00:02:32.949 compress/isal: not in enabled drivers build config 00:02:32.949 compress/mlx5: not in enabled drivers build config 00:02:32.949 compress/nitrox: not in enabled drivers build config 00:02:32.949 compress/octeontx: not in enabled drivers build config 00:02:32.949 compress/zlib: not in enabled drivers build config 00:02:32.949 regex/*: missing internal dependency, "regexdev" 00:02:32.949 ml/*: missing internal dependency, "mldev" 00:02:32.949 vdpa/ifc: not in enabled drivers build config 00:02:32.949 vdpa/mlx5: not in enabled drivers build config 00:02:32.949 vdpa/nfp: not in enabled drivers build config 00:02:32.949 vdpa/sfc: not in enabled drivers build config 00:02:32.949 event/*: missing internal dependency, "eventdev" 00:02:32.949 baseband/*: missing internal dependency, "bbdev" 00:02:32.949 gpu/*: missing internal dependency, "gpudev" 00:02:32.949 00:02:32.949 00:02:32.949 Build targets in project: 85 00:02:32.949 00:02:32.949 DPDK 24.03.0 00:02:32.949 00:02:32.949 User defined options 00:02:32.949 buildtype : debug 00:02:32.949 default_library : shared 00:02:32.949 libdir : lib 00:02:32.949 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:32.949 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:32.949 c_link_args : 00:02:32.949 cpu_instruction_set: native 00:02:32.949 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:32.949 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:32.949 enable_docs : false 00:02:32.949 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:32.949 enable_kmods : false 00:02:32.949 max_lcores : 128 00:02:32.949 tests : false 00:02:32.949 00:02:32.949 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.516 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:33.516 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:33.516 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:33.516 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.516 [4/268] Linking static target lib/librte_kvargs.a 00:02:33.774 [5/268] Linking static target lib/librte_log.a 00:02:33.774 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.341 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.341 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.341 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.599 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.599 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.599 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.599 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.599 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.599 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.599 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.599 [17/268] Linking static target lib/librte_telemetry.a 00:02:34.857 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.857 [19/268] Linking target lib/librte_log.so.24.1 00:02:34.857 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.423 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.423 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:35.423 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.681 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.681 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.681 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.681 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.681 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.681 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.681 [30/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.681 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:35.939 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.939 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.939 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.939 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.197 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.456 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.456 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.715 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.715 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.715 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.715 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.715 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.715 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.715 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.973 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.231 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.231 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.490 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.490 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.490 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.748 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.748 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.748 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.007 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.007 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.266 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.266 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.266 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.524 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.524 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.524 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.524 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.782 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.040 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.298 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.298 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.298 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.557 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.557 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.557 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.557 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.815 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.815 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.815 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.815 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.397 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.397 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.679 [79/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.679 [80/268] Linking static target lib/librte_ring.a 00:02:40.679 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.679 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.937 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.937 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.937 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.937 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.937 [87/268] Linking static target lib/librte_eal.a 00:02:41.195 [88/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.195 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.195 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.195 [91/268] Linking static target lib/librte_rcu.a 00:02:41.454 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.712 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.970 [94/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.970 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.970 [96/268] Linking static target lib/librte_mempool.a 00:02:41.970 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.970 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.970 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.228 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.228 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.488 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.488 [103/268] Linking static target lib/librte_mbuf.a 00:02:42.748 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.748 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.006 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:43.006 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.006 [108/268] Linking static target lib/librte_meter.a 00:02:43.006 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.264 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.264 [111/268] Linking static target lib/librte_net.a 00:02:43.522 [112/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.522 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.522 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.781 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.781 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.781 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.781 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.039 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.605 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.605 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.863 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.863 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.863 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.863 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.121 [126/268] Linking static target lib/librte_pci.a 00:02:45.121 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.379 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.379 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.379 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.379 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.637 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.637 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.637 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.637 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.637 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.637 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.637 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.637 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.637 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.896 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.896 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.896 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.896 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:46.462 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.462 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:46.462 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.462 [148/268] Linking static target lib/librte_cmdline.a 00:02:46.462 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.462 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.720 [151/268] Compiling C 
object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.978 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.978 [153/268] Linking static target lib/librte_ethdev.a 00:02:47.237 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.237 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.237 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.237 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.237 [158/268] Linking static target lib/librte_hash.a 00:02:47.495 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.495 [160/268] Linking static target lib/librte_timer.a 00:02:47.753 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.753 [162/268] Linking static target lib/librte_compressdev.a 00:02:47.753 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.012 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.270 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.270 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.270 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.270 [168/268] Linking static target lib/librte_dmadev.a 00:02:48.528 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.528 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.528 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.786 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.786 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.786 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.786 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.045 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.045 [177/268] Linking static target lib/librte_cryptodev.a 00:02:49.303 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.303 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.303 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.562 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.562 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.140 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.140 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:50.140 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:50.140 [186/268] Linking static target lib/librte_power.a 00:02:50.140 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.140 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.140 [189/268] Linking static target lib/librte_reorder.a 00:02:50.424 [190/268] Linking static target lib/librte_security.a 00:02:50.682 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.682 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.940 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.198 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.457 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.457 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.715 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.715 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.715 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.281 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.281 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.539 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.539 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.539 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.797 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.797 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.055 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.055 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.055 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.055 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.314 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.572 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.572 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.572 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.572 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.572 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:53.830 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.830 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.830 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.830 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.830 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.830 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:54.088 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.088 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.088 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.088 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.088 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:54.652 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:55.217 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.476 [230/268] Linking target lib/librte_eal.so.24.1 00:02:55.476 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.476 [232/268] Linking target lib/librte_pci.so.24.1 00:02:55.734 [233/268] Linking target lib/librte_ring.so.24.1 00:02:55.734 [234/268] Linking target lib/librte_meter.so.24.1 00:02:55.734 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:55.734 [236/268] Linking target lib/librte_timer.so.24.1 00:02:55.734 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.734 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.734 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.734 [240/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.992 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.992 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.992 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.992 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:55.992 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:55.992 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.992 [247/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.250 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:56.250 [249/268] Linking static target lib/librte_vhost.a 00:02:56.250 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:56.250 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:56.250 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:56.250 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:56.250 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:56.250 [255/268] Linking target lib/librte_net.so.24.1 00:02:56.508 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:56.508 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:56.508 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:56.508 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:56.508 [260/268] Linking target lib/librte_hash.so.24.1 00:02:56.508 [261/268] Linking target lib/librte_security.so.24.1 00:02:56.766 [262/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.766 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.766 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:57.024 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:57.024 [266/268] Linking target lib/librte_power.so.24.1 00:02:57.590 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.590 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:57.590 INFO: autodetecting backend as ninja 00:02:57.590 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:29.667 CC lib/ut/ut.o 00:03:29.667 CC lib/ut_mock/mock.o 00:03:29.667 CC lib/log/log_flags.o 00:03:29.667 CC 
lib/log/log_deprecated.o 00:03:29.667 CC lib/log/log.o 00:03:29.667 LIB libspdk_ut.a 00:03:29.667 LIB libspdk_ut_mock.a 00:03:29.667 LIB libspdk_log.a 00:03:29.667 SO libspdk_ut.so.2.0 00:03:29.667 SO libspdk_ut_mock.so.6.0 00:03:29.667 SO libspdk_log.so.7.1 00:03:29.667 SYMLINK libspdk_ut.so 00:03:29.667 SYMLINK libspdk_ut_mock.so 00:03:29.667 SYMLINK libspdk_log.so 00:03:29.667 CC lib/ioat/ioat.o 00:03:29.667 CXX lib/trace_parser/trace.o 00:03:29.667 CC lib/dma/dma.o 00:03:29.667 CC lib/util/bit_array.o 00:03:29.667 CC lib/util/base64.o 00:03:29.667 CC lib/util/cpuset.o 00:03:29.667 CC lib/util/crc16.o 00:03:29.667 CC lib/util/crc32.o 00:03:29.667 CC lib/util/crc32c.o 00:03:29.667 CC lib/vfio_user/host/vfio_user_pci.o 00:03:29.667 CC lib/vfio_user/host/vfio_user.o 00:03:29.667 CC lib/util/crc32_ieee.o 00:03:29.667 CC lib/util/crc64.o 00:03:29.667 CC lib/util/dif.o 00:03:29.667 CC lib/util/fd.o 00:03:29.667 LIB libspdk_dma.a 00:03:29.667 CC lib/util/fd_group.o 00:03:29.667 SO libspdk_dma.so.5.0 00:03:29.667 LIB libspdk_ioat.a 00:03:29.667 SO libspdk_ioat.so.7.0 00:03:29.667 SYMLINK libspdk_dma.so 00:03:29.667 CC lib/util/file.o 00:03:29.667 CC lib/util/hexlify.o 00:03:29.667 CC lib/util/iov.o 00:03:29.667 SYMLINK libspdk_ioat.so 00:03:29.667 CC lib/util/math.o 00:03:29.667 CC lib/util/net.o 00:03:29.667 CC lib/util/pipe.o 00:03:29.667 LIB libspdk_vfio_user.a 00:03:29.667 SO libspdk_vfio_user.so.5.0 00:03:29.926 SYMLINK libspdk_vfio_user.so 00:03:29.926 CC lib/util/strerror_tls.o 00:03:29.926 CC lib/util/string.o 00:03:29.926 CC lib/util/uuid.o 00:03:29.926 CC lib/util/xor.o 00:03:29.926 CC lib/util/zipf.o 00:03:29.926 CC lib/util/md5.o 00:03:30.184 LIB libspdk_util.a 00:03:30.442 SO libspdk_util.so.10.1 00:03:30.442 LIB libspdk_trace_parser.a 00:03:30.442 SO libspdk_trace_parser.so.6.0 00:03:30.442 SYMLINK libspdk_util.so 00:03:30.442 SYMLINK libspdk_trace_parser.so 00:03:30.700 CC lib/json/json_parse.o 00:03:30.700 CC lib/json/json_util.o 00:03:30.700 CC lib/json/json_write.o 00:03:30.700 CC lib/vmd/vmd.o 00:03:30.700 CC lib/idxd/idxd.o 00:03:30.700 CC lib/vmd/led.o 00:03:30.700 CC lib/idxd/idxd_user.o 00:03:30.700 CC lib/rdma_utils/rdma_utils.o 00:03:30.700 CC lib/conf/conf.o 00:03:30.700 CC lib/env_dpdk/env.o 00:03:30.700 CC lib/env_dpdk/memory.o 00:03:30.958 LIB libspdk_conf.a 00:03:30.958 CC lib/env_dpdk/pci.o 00:03:30.958 CC lib/idxd/idxd_kernel.o 00:03:30.958 CC lib/env_dpdk/init.o 00:03:30.958 SO libspdk_conf.so.6.0 00:03:30.958 LIB libspdk_rdma_utils.a 00:03:30.958 LIB libspdk_json.a 00:03:30.958 SO libspdk_rdma_utils.so.1.0 00:03:30.958 SYMLINK libspdk_conf.so 00:03:30.959 SO libspdk_json.so.6.0 00:03:30.959 CC lib/env_dpdk/threads.o 00:03:30.959 SYMLINK libspdk_rdma_utils.so 00:03:30.959 CC lib/env_dpdk/pci_ioat.o 00:03:30.959 SYMLINK libspdk_json.so 00:03:31.217 CC lib/env_dpdk/pci_virtio.o 00:03:31.217 CC lib/env_dpdk/pci_vmd.o 00:03:31.217 CC lib/rdma_provider/common.o 00:03:31.217 LIB libspdk_idxd.a 00:03:31.217 CC lib/jsonrpc/jsonrpc_server.o 00:03:31.217 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:31.217 SO libspdk_idxd.so.12.1 00:03:31.217 LIB libspdk_vmd.a 00:03:31.217 CC lib/jsonrpc/jsonrpc_client.o 00:03:31.217 SO libspdk_vmd.so.6.0 00:03:31.217 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:31.217 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:31.217 SYMLINK libspdk_idxd.so 00:03:31.217 CC lib/env_dpdk/pci_idxd.o 00:03:31.476 SYMLINK libspdk_vmd.so 00:03:31.476 CC lib/env_dpdk/pci_event.o 00:03:31.476 CC lib/env_dpdk/sigbus_handler.o 00:03:31.476 CC 
lib/env_dpdk/pci_dpdk.o 00:03:31.476 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:31.476 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:31.476 LIB libspdk_jsonrpc.a 00:03:31.476 LIB libspdk_rdma_provider.a 00:03:31.476 SO libspdk_jsonrpc.so.6.0 00:03:31.476 SO libspdk_rdma_provider.so.7.0 00:03:31.734 SYMLINK libspdk_jsonrpc.so 00:03:31.734 SYMLINK libspdk_rdma_provider.so 00:03:31.993 CC lib/rpc/rpc.o 00:03:31.993 LIB libspdk_env_dpdk.a 00:03:32.251 LIB libspdk_rpc.a 00:03:32.251 SO libspdk_env_dpdk.so.15.1 00:03:32.251 SO libspdk_rpc.so.6.0 00:03:32.251 SYMLINK libspdk_rpc.so 00:03:32.251 SYMLINK libspdk_env_dpdk.so 00:03:32.510 CC lib/notify/notify.o 00:03:32.510 CC lib/notify/notify_rpc.o 00:03:32.510 CC lib/trace/trace.o 00:03:32.510 CC lib/trace/trace_flags.o 00:03:32.510 CC lib/trace/trace_rpc.o 00:03:32.510 CC lib/keyring/keyring.o 00:03:32.510 CC lib/keyring/keyring_rpc.o 00:03:32.510 LIB libspdk_notify.a 00:03:32.769 SO libspdk_notify.so.6.0 00:03:32.769 SYMLINK libspdk_notify.so 00:03:32.769 LIB libspdk_keyring.a 00:03:32.769 LIB libspdk_trace.a 00:03:32.769 SO libspdk_keyring.so.2.0 00:03:32.769 SO libspdk_trace.so.11.0 00:03:32.769 SYMLINK libspdk_keyring.so 00:03:32.769 SYMLINK libspdk_trace.so 00:03:33.028 CC lib/sock/sock.o 00:03:33.028 CC lib/thread/thread.o 00:03:33.028 CC lib/thread/iobuf.o 00:03:33.028 CC lib/sock/sock_rpc.o 00:03:33.594 LIB libspdk_sock.a 00:03:33.594 SO libspdk_sock.so.10.0 00:03:33.853 SYMLINK libspdk_sock.so 00:03:34.112 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:34.112 CC lib/nvme/nvme_fabric.o 00:03:34.112 CC lib/nvme/nvme_ctrlr.o 00:03:34.112 CC lib/nvme/nvme_ns.o 00:03:34.112 CC lib/nvme/nvme_ns_cmd.o 00:03:34.112 CC lib/nvme/nvme_pcie_common.o 00:03:34.112 CC lib/nvme/nvme_pcie.o 00:03:34.112 CC lib/nvme/nvme_qpair.o 00:03:34.112 CC lib/nvme/nvme.o 00:03:34.680 LIB libspdk_thread.a 00:03:34.939 CC lib/nvme/nvme_quirks.o 00:03:34.939 SO libspdk_thread.so.11.0 00:03:34.939 CC lib/nvme/nvme_transport.o 00:03:34.939 SYMLINK libspdk_thread.so 00:03:34.939 CC lib/nvme/nvme_discovery.o 00:03:34.939 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:34.939 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:34.939 CC lib/nvme/nvme_tcp.o 00:03:35.199 CC lib/nvme/nvme_opal.o 00:03:35.199 CC lib/nvme/nvme_io_msg.o 00:03:35.199 CC lib/nvme/nvme_poll_group.o 00:03:35.458 CC lib/nvme/nvme_zns.o 00:03:35.458 CC lib/nvme/nvme_stubs.o 00:03:35.718 CC lib/nvme/nvme_auth.o 00:03:35.718 CC lib/nvme/nvme_cuse.o 00:03:35.718 CC lib/nvme/nvme_rdma.o 00:03:35.718 CC lib/accel/accel.o 00:03:35.976 CC lib/blob/blobstore.o 00:03:35.976 CC lib/init/json_config.o 00:03:36.240 CC lib/init/subsystem.o 00:03:36.240 CC lib/virtio/virtio.o 00:03:36.240 CC lib/init/subsystem_rpc.o 00:03:36.240 CC lib/init/rpc.o 00:03:36.532 CC lib/accel/accel_rpc.o 00:03:36.532 LIB libspdk_init.a 00:03:36.532 CC lib/virtio/virtio_vhost_user.o 00:03:36.532 SO libspdk_init.so.6.0 00:03:36.532 CC lib/virtio/virtio_vfio_user.o 00:03:36.532 CC lib/virtio/virtio_pci.o 00:03:36.799 CC lib/accel/accel_sw.o 00:03:36.799 CC lib/blob/request.o 00:03:36.799 SYMLINK libspdk_init.so 00:03:36.799 CC lib/blob/zeroes.o 00:03:36.799 CC lib/fsdev/fsdev.o 00:03:36.799 CC lib/blob/blob_bs_dev.o 00:03:36.799 CC lib/fsdev/fsdev_io.o 00:03:36.799 CC lib/fsdev/fsdev_rpc.o 00:03:37.057 LIB libspdk_virtio.a 00:03:37.057 SO libspdk_virtio.so.7.0 00:03:37.057 LIB libspdk_accel.a 00:03:37.057 SO libspdk_accel.so.16.0 00:03:37.057 SYMLINK libspdk_virtio.so 00:03:37.057 CC lib/event/app.o 00:03:37.057 CC lib/event/reactor.o 00:03:37.057 CC lib/event/log_rpc.o 00:03:37.057 
CC lib/event/app_rpc.o 00:03:37.057 CC lib/event/scheduler_static.o 00:03:37.057 SYMLINK libspdk_accel.so 00:03:37.316 LIB libspdk_nvme.a 00:03:37.316 CC lib/bdev/bdev.o 00:03:37.316 CC lib/bdev/bdev_rpc.o 00:03:37.316 CC lib/bdev/bdev_zone.o 00:03:37.316 CC lib/bdev/part.o 00:03:37.316 CC lib/bdev/scsi_nvme.o 00:03:37.316 LIB libspdk_fsdev.a 00:03:37.575 SO libspdk_nvme.so.15.0 00:03:37.575 SO libspdk_fsdev.so.2.0 00:03:37.575 SYMLINK libspdk_fsdev.so 00:03:37.575 LIB libspdk_event.a 00:03:37.575 SO libspdk_event.so.14.0 00:03:37.575 SYMLINK libspdk_event.so 00:03:37.575 SYMLINK libspdk_nvme.so 00:03:37.833 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:38.401 LIB libspdk_fuse_dispatcher.a 00:03:38.401 SO libspdk_fuse_dispatcher.so.1.0 00:03:38.401 SYMLINK libspdk_fuse_dispatcher.so 00:03:38.969 LIB libspdk_blob.a 00:03:38.969 SO libspdk_blob.so.12.0 00:03:39.229 SYMLINK libspdk_blob.so 00:03:39.487 CC lib/lvol/lvol.o 00:03:39.487 CC lib/blobfs/tree.o 00:03:39.487 CC lib/blobfs/blobfs.o 00:03:40.055 LIB libspdk_bdev.a 00:03:40.055 SO libspdk_bdev.so.17.0 00:03:40.314 SYMLINK libspdk_bdev.so 00:03:40.314 LIB libspdk_blobfs.a 00:03:40.314 CC lib/ftl/ftl_core.o 00:03:40.314 CC lib/ftl/ftl_init.o 00:03:40.314 CC lib/ftl/ftl_layout.o 00:03:40.314 CC lib/ftl/ftl_debug.o 00:03:40.314 CC lib/ublk/ublk.o 00:03:40.314 CC lib/scsi/dev.o 00:03:40.314 SO libspdk_blobfs.so.11.0 00:03:40.314 CC lib/nvmf/ctrlr.o 00:03:40.314 CC lib/nbd/nbd.o 00:03:40.314 LIB libspdk_lvol.a 00:03:40.572 SO libspdk_lvol.so.11.0 00:03:40.572 SYMLINK libspdk_blobfs.so 00:03:40.572 CC lib/nvmf/ctrlr_discovery.o 00:03:40.572 SYMLINK libspdk_lvol.so 00:03:40.573 CC lib/nvmf/ctrlr_bdev.o 00:03:40.573 CC lib/ftl/ftl_io.o 00:03:40.573 CC lib/ublk/ublk_rpc.o 00:03:40.573 CC lib/scsi/lun.o 00:03:40.831 CC lib/scsi/port.o 00:03:40.831 CC lib/scsi/scsi.o 00:03:40.831 CC lib/nbd/nbd_rpc.o 00:03:40.831 CC lib/ftl/ftl_sb.o 00:03:40.831 CC lib/scsi/scsi_bdev.o 00:03:41.090 CC lib/scsi/scsi_pr.o 00:03:41.090 CC lib/nvmf/subsystem.o 00:03:41.090 CC lib/nvmf/nvmf.o 00:03:41.090 CC lib/nvmf/nvmf_rpc.o 00:03:41.090 LIB libspdk_nbd.a 00:03:41.090 SO libspdk_nbd.so.7.0 00:03:41.090 CC lib/ftl/ftl_l2p.o 00:03:41.090 LIB libspdk_ublk.a 00:03:41.090 SYMLINK libspdk_nbd.so 00:03:41.090 CC lib/ftl/ftl_l2p_flat.o 00:03:41.090 SO libspdk_ublk.so.3.0 00:03:41.090 SYMLINK libspdk_ublk.so 00:03:41.090 CC lib/ftl/ftl_nv_cache.o 00:03:41.349 CC lib/nvmf/transport.o 00:03:41.349 CC lib/nvmf/tcp.o 00:03:41.349 CC lib/nvmf/stubs.o 00:03:41.349 CC lib/ftl/ftl_band.o 00:03:41.349 CC lib/scsi/scsi_rpc.o 00:03:41.608 CC lib/scsi/task.o 00:03:41.867 CC lib/nvmf/mdns_server.o 00:03:41.867 CC lib/nvmf/rdma.o 00:03:41.867 LIB libspdk_scsi.a 00:03:41.867 CC lib/nvmf/auth.o 00:03:41.867 SO libspdk_scsi.so.9.0 00:03:41.867 CC lib/ftl/ftl_band_ops.o 00:03:41.867 CC lib/ftl/ftl_writer.o 00:03:41.867 SYMLINK libspdk_scsi.so 00:03:41.867 CC lib/ftl/ftl_rq.o 00:03:42.126 CC lib/ftl/ftl_reloc.o 00:03:42.126 CC lib/ftl/ftl_l2p_cache.o 00:03:42.126 CC lib/ftl/ftl_p2l.o 00:03:42.126 CC lib/ftl/ftl_p2l_log.o 00:03:42.384 CC lib/ftl/mngt/ftl_mngt.o 00:03:42.384 CC lib/iscsi/conn.o 00:03:42.384 CC lib/vhost/vhost.o 00:03:42.643 CC lib/vhost/vhost_rpc.o 00:03:42.643 CC lib/iscsi/init_grp.o 00:03:42.643 CC lib/iscsi/iscsi.o 00:03:42.643 CC lib/vhost/vhost_scsi.o 00:03:42.643 CC lib/iscsi/param.o 00:03:42.643 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:42.901 CC lib/iscsi/portal_grp.o 00:03:42.901 CC lib/vhost/vhost_blk.o 00:03:42.901 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:42.901 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.901 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:43.159 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:43.159 CC lib/vhost/rte_vhost_user.o 00:03:43.159 CC lib/iscsi/tgt_node.o 00:03:43.159 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:43.418 CC lib/iscsi/iscsi_subsystem.o 00:03:43.418 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.418 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.418 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.418 CC lib/iscsi/iscsi_rpc.o 00:03:43.676 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.676 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.676 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.935 CC lib/iscsi/task.o 00:03:43.935 CC lib/ftl/utils/ftl_conf.o 00:03:43.935 CC lib/ftl/utils/ftl_md.o 00:03:43.935 LIB libspdk_nvmf.a 00:03:43.935 CC lib/ftl/utils/ftl_mempool.o 00:03:43.935 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.935 SO libspdk_nvmf.so.20.0 00:03:43.935 CC lib/ftl/utils/ftl_property.o 00:03:43.935 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:44.193 LIB libspdk_iscsi.a 00:03:44.193 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:44.193 SO libspdk_iscsi.so.8.0 00:03:44.193 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:44.193 SYMLINK libspdk_nvmf.so 00:03:44.193 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:44.193 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:44.193 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:44.452 SYMLINK libspdk_iscsi.so 00:03:44.452 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:44.452 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:44.452 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.452 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.452 LIB libspdk_vhost.a 00:03:44.452 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.452 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:44.452 SO libspdk_vhost.so.8.0 00:03:44.452 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:44.452 CC lib/ftl/base/ftl_base_dev.o 00:03:44.452 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.452 CC lib/ftl/ftl_trace.o 00:03:44.452 SYMLINK libspdk_vhost.so 00:03:44.710 LIB libspdk_ftl.a 00:03:44.967 SO libspdk_ftl.so.9.0 00:03:45.225 SYMLINK libspdk_ftl.so 00:03:45.791 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.791 CC module/accel/ioat/accel_ioat.o 00:03:45.791 CC module/blob/bdev/blob_bdev.o 00:03:45.791 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.791 CC module/accel/dsa/accel_dsa.o 00:03:45.791 CC module/accel/iaa/accel_iaa.o 00:03:45.791 CC module/keyring/file/keyring.o 00:03:45.791 CC module/fsdev/aio/fsdev_aio.o 00:03:45.791 CC module/sock/posix/posix.o 00:03:45.791 CC module/accel/error/accel_error.o 00:03:45.791 LIB libspdk_env_dpdk_rpc.a 00:03:45.791 SO libspdk_env_dpdk_rpc.so.6.0 00:03:45.791 SYMLINK libspdk_env_dpdk_rpc.so 00:03:45.791 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.049 CC module/keyring/file/keyring_rpc.o 00:03:46.049 CC module/accel/error/accel_error_rpc.o 00:03:46.049 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.049 LIB libspdk_scheduler_dynamic.a 00:03:46.049 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.049 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.049 LIB libspdk_keyring_file.a 00:03:46.049 LIB libspdk_blob_bdev.a 00:03:46.049 LIB libspdk_accel_dsa.a 00:03:46.049 SO libspdk_keyring_file.so.2.0 00:03:46.049 SO libspdk_blob_bdev.so.12.0 00:03:46.049 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.049 LIB libspdk_accel_error.a 00:03:46.049 LIB libspdk_accel_iaa.a 00:03:46.308 SO libspdk_accel_dsa.so.5.0 00:03:46.308 LIB libspdk_accel_ioat.a 00:03:46.308 SYMLINK libspdk_keyring_file.so 00:03:46.308 SYMLINK libspdk_blob_bdev.so 00:03:46.308 SO libspdk_accel_error.so.2.0 00:03:46.308 SO 
libspdk_accel_iaa.so.3.0 00:03:46.308 SO libspdk_accel_ioat.so.6.0 00:03:46.308 SYMLINK libspdk_accel_dsa.so 00:03:46.308 SYMLINK libspdk_accel_iaa.so 00:03:46.308 SYMLINK libspdk_accel_error.so 00:03:46.308 SYMLINK libspdk_accel_ioat.so 00:03:46.308 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:46.308 CC module/fsdev/aio/linux_aio_mgr.o 00:03:46.308 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.308 CC module/keyring/linux/keyring.o 00:03:46.308 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.308 CC module/sock/uring/uring.o 00:03:46.567 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.567 CC module/bdev/delay/vbdev_delay.o 00:03:46.567 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.567 CC module/keyring/linux/keyring_rpc.o 00:03:46.567 LIB libspdk_fsdev_aio.a 00:03:46.567 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.567 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.567 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.567 LIB libspdk_scheduler_gscheduler.a 00:03:46.567 SO libspdk_fsdev_aio.so.1.0 00:03:46.567 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.567 LIB libspdk_sock_posix.a 00:03:46.567 CC module/bdev/error/vbdev_error.o 00:03:46.567 SO libspdk_sock_posix.so.6.0 00:03:46.567 SYMLINK libspdk_fsdev_aio.so 00:03:46.567 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.567 LIB libspdk_keyring_linux.a 00:03:46.825 CC module/bdev/gpt/gpt.o 00:03:46.825 SO libspdk_keyring_linux.so.1.0 00:03:46.825 SYMLINK libspdk_sock_posix.so 00:03:46.825 LIB libspdk_blobfs_bdev.a 00:03:46.825 SYMLINK libspdk_keyring_linux.so 00:03:46.825 SO libspdk_blobfs_bdev.so.6.0 00:03:46.825 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.825 CC module/bdev/malloc/bdev_malloc.o 00:03:46.825 SYMLINK libspdk_blobfs_bdev.so 00:03:46.825 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.825 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.825 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.825 CC module/bdev/null/bdev_null.o 00:03:46.825 CC module/bdev/nvme/bdev_nvme.o 00:03:46.825 CC module/bdev/error/vbdev_error_rpc.o 00:03:47.084 CC module/bdev/passthru/vbdev_passthru.o 00:03:47.084 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:47.084 LIB libspdk_bdev_delay.a 00:03:47.084 LIB libspdk_bdev_error.a 00:03:47.084 SO libspdk_bdev_delay.so.6.0 00:03:47.084 LIB libspdk_sock_uring.a 00:03:47.084 SO libspdk_bdev_error.so.6.0 00:03:47.084 SO libspdk_sock_uring.so.5.0 00:03:47.084 SYMLINK libspdk_bdev_delay.so 00:03:47.084 CC module/bdev/null/bdev_null_rpc.o 00:03:47.084 LIB libspdk_bdev_gpt.a 00:03:47.343 SYMLINK libspdk_bdev_error.so 00:03:47.343 SYMLINK libspdk_sock_uring.so 00:03:47.343 SO libspdk_bdev_gpt.so.6.0 00:03:47.343 LIB libspdk_bdev_malloc.a 00:03:47.343 LIB libspdk_bdev_passthru.a 00:03:47.343 SO libspdk_bdev_malloc.so.6.0 00:03:47.343 SYMLINK libspdk_bdev_gpt.so 00:03:47.343 SO libspdk_bdev_passthru.so.6.0 00:03:47.343 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:47.343 CC module/bdev/raid/bdev_raid.o 00:03:47.343 SYMLINK libspdk_bdev_malloc.so 00:03:47.343 LIB libspdk_bdev_null.a 00:03:47.343 CC module/bdev/split/vbdev_split.o 00:03:47.343 SYMLINK libspdk_bdev_passthru.so 00:03:47.343 CC module/bdev/uring/bdev_uring.o 00:03:47.343 CC module/bdev/uring/bdev_uring_rpc.o 00:03:47.343 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.343 SO libspdk_bdev_null.so.6.0 00:03:47.601 CC module/bdev/aio/bdev_aio.o 00:03:47.601 SYMLINK libspdk_bdev_null.so 00:03:47.601 CC module/bdev/ftl/bdev_ftl.o 00:03:47.601 CC module/bdev/split/vbdev_split_rpc.o 00:03:47.601 CC module/bdev/iscsi/bdev_iscsi.o 
00:03:47.860 LIB libspdk_bdev_lvol.a 00:03:47.860 SO libspdk_bdev_lvol.so.6.0 00:03:47.860 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.860 LIB libspdk_bdev_uring.a 00:03:47.860 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:47.860 SO libspdk_bdev_uring.so.6.0 00:03:47.860 LIB libspdk_bdev_split.a 00:03:47.860 SYMLINK libspdk_bdev_lvol.so 00:03:47.860 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.860 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:47.860 SYMLINK libspdk_bdev_uring.so 00:03:47.860 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:47.860 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:47.860 SO libspdk_bdev_split.so.6.0 00:03:48.118 LIB libspdk_bdev_zone_block.a 00:03:48.118 SO libspdk_bdev_zone_block.so.6.0 00:03:48.118 LIB libspdk_bdev_aio.a 00:03:48.118 SYMLINK libspdk_bdev_split.so 00:03:48.118 CC module/bdev/raid/bdev_raid_rpc.o 00:03:48.118 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:48.118 CC module/bdev/raid/bdev_raid_sb.o 00:03:48.118 LIB libspdk_bdev_ftl.a 00:03:48.118 SO libspdk_bdev_aio.so.6.0 00:03:48.118 SYMLINK libspdk_bdev_zone_block.so 00:03:48.118 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:48.118 CC module/bdev/nvme/nvme_rpc.o 00:03:48.118 SO libspdk_bdev_ftl.so.6.0 00:03:48.118 SYMLINK libspdk_bdev_aio.so 00:03:48.118 CC module/bdev/nvme/bdev_mdns_client.o 00:03:48.118 SYMLINK libspdk_bdev_ftl.so 00:03:48.377 CC module/bdev/raid/raid0.o 00:03:48.377 LIB libspdk_bdev_iscsi.a 00:03:48.377 SO libspdk_bdev_iscsi.so.6.0 00:03:48.377 CC module/bdev/raid/raid1.o 00:03:48.377 LIB libspdk_bdev_virtio.a 00:03:48.377 CC module/bdev/nvme/vbdev_opal.o 00:03:48.377 CC module/bdev/raid/concat.o 00:03:48.377 SO libspdk_bdev_virtio.so.6.0 00:03:48.377 SYMLINK libspdk_bdev_iscsi.so 00:03:48.377 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:48.377 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:48.377 SYMLINK libspdk_bdev_virtio.so 00:03:48.635 LIB libspdk_bdev_raid.a 00:03:48.635 SO libspdk_bdev_raid.so.6.0 00:03:48.907 SYMLINK libspdk_bdev_raid.so 00:03:49.490 LIB libspdk_bdev_nvme.a 00:03:49.749 SO libspdk_bdev_nvme.so.7.1 00:03:49.749 SYMLINK libspdk_bdev_nvme.so 00:03:50.316 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.316 CC module/event/subsystems/vmd/vmd.o 00:03:50.316 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.316 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.316 CC module/event/subsystems/keyring/keyring.o 00:03:50.316 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.316 CC module/event/subsystems/sock/sock.o 00:03:50.316 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.316 CC module/event/subsystems/fsdev/fsdev.o 00:03:50.316 LIB libspdk_event_keyring.a 00:03:50.574 LIB libspdk_event_vhost_blk.a 00:03:50.574 SO libspdk_event_keyring.so.1.0 00:03:50.574 SO libspdk_event_vhost_blk.so.3.0 00:03:50.574 LIB libspdk_event_sock.a 00:03:50.574 LIB libspdk_event_vmd.a 00:03:50.574 LIB libspdk_event_scheduler.a 00:03:50.574 LIB libspdk_event_iobuf.a 00:03:50.574 SO libspdk_event_sock.so.5.0 00:03:50.574 LIB libspdk_event_fsdev.a 00:03:50.574 SO libspdk_event_scheduler.so.4.0 00:03:50.574 SO libspdk_event_vmd.so.6.0 00:03:50.574 SYMLINK libspdk_event_keyring.so 00:03:50.574 SYMLINK libspdk_event_vhost_blk.so 00:03:50.574 SO libspdk_event_iobuf.so.3.0 00:03:50.574 SO libspdk_event_fsdev.so.1.0 00:03:50.574 SYMLINK libspdk_event_sock.so 00:03:50.574 SYMLINK libspdk_event_vmd.so 00:03:50.574 SYMLINK libspdk_event_scheduler.so 00:03:50.574 SYMLINK libspdk_event_fsdev.so 00:03:50.574 SYMLINK libspdk_event_iobuf.so 00:03:50.832 CC 
module/event/subsystems/accel/accel.o 00:03:51.090 LIB libspdk_event_accel.a 00:03:51.090 SO libspdk_event_accel.so.6.0 00:03:51.090 SYMLINK libspdk_event_accel.so 00:03:51.348 CC module/event/subsystems/bdev/bdev.o 00:03:51.607 LIB libspdk_event_bdev.a 00:03:51.607 SO libspdk_event_bdev.so.6.0 00:03:51.607 SYMLINK libspdk_event_bdev.so 00:03:51.866 CC module/event/subsystems/ublk/ublk.o 00:03:51.866 CC module/event/subsystems/nbd/nbd.o 00:03:51.866 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.866 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.866 CC module/event/subsystems/scsi/scsi.o 00:03:52.124 LIB libspdk_event_nbd.a 00:03:52.124 LIB libspdk_event_ublk.a 00:03:52.124 SO libspdk_event_nbd.so.6.0 00:03:52.124 SO libspdk_event_ublk.so.3.0 00:03:52.124 LIB libspdk_event_scsi.a 00:03:52.124 SO libspdk_event_scsi.so.6.0 00:03:52.124 SYMLINK libspdk_event_ublk.so 00:03:52.124 SYMLINK libspdk_event_nbd.so 00:03:52.124 LIB libspdk_event_nvmf.a 00:03:52.124 SYMLINK libspdk_event_scsi.so 00:03:52.124 SO libspdk_event_nvmf.so.6.0 00:03:52.383 SYMLINK libspdk_event_nvmf.so 00:03:52.383 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:52.383 CC module/event/subsystems/iscsi/iscsi.o 00:03:52.642 LIB libspdk_event_vhost_scsi.a 00:03:52.642 SO libspdk_event_vhost_scsi.so.3.0 00:03:52.642 LIB libspdk_event_iscsi.a 00:03:52.642 SO libspdk_event_iscsi.so.6.0 00:03:52.642 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.642 SYMLINK libspdk_event_iscsi.so 00:03:52.901 SO libspdk.so.6.0 00:03:52.901 SYMLINK libspdk.so 00:03:53.159 CC app/trace_record/trace_record.o 00:03:53.159 CXX app/trace/trace.o 00:03:53.159 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:53.159 CC app/nvmf_tgt/nvmf_main.o 00:03:53.159 CC app/iscsi_tgt/iscsi_tgt.o 00:03:53.159 CC examples/util/zipf/zipf.o 00:03:53.159 CC test/thread/poller_perf/poller_perf.o 00:03:53.159 CC examples/ioat/perf/perf.o 00:03:53.159 CC test/app/bdev_svc/bdev_svc.o 00:03:53.417 CC test/dma/test_dma/test_dma.o 00:03:53.417 LINK poller_perf 00:03:53.417 LINK nvmf_tgt 00:03:53.417 LINK spdk_trace_record 00:03:53.417 LINK zipf 00:03:53.417 LINK iscsi_tgt 00:03:53.417 LINK interrupt_tgt 00:03:53.417 LINK bdev_svc 00:03:53.417 LINK ioat_perf 00:03:53.676 LINK spdk_trace 00:03:53.676 CC examples/ioat/verify/verify.o 00:03:53.676 CC app/spdk_lspci/spdk_lspci.o 00:03:53.676 CC app/spdk_tgt/spdk_tgt.o 00:03:53.934 LINK spdk_lspci 00:03:53.934 CC examples/sock/hello_world/hello_sock.o 00:03:53.934 CC examples/thread/thread/thread_ex.o 00:03:53.934 CC app/spdk_nvme_perf/perf.o 00:03:53.934 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.934 LINK test_dma 00:03:53.934 CC examples/idxd/perf/perf.o 00:03:53.934 LINK verify 00:03:53.934 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:53.934 LINK spdk_tgt 00:03:53.934 LINK lsvmd 00:03:54.192 TEST_HEADER include/spdk/accel.h 00:03:54.192 TEST_HEADER include/spdk/accel_module.h 00:03:54.192 TEST_HEADER include/spdk/assert.h 00:03:54.192 TEST_HEADER include/spdk/barrier.h 00:03:54.192 TEST_HEADER include/spdk/base64.h 00:03:54.192 TEST_HEADER include/spdk/bdev.h 00:03:54.192 TEST_HEADER include/spdk/bdev_module.h 00:03:54.192 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.192 LINK hello_sock 00:03:54.192 TEST_HEADER include/spdk/bit_array.h 00:03:54.192 TEST_HEADER include/spdk/bit_pool.h 00:03:54.192 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.192 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.192 TEST_HEADER include/spdk/blobfs.h 00:03:54.192 TEST_HEADER include/spdk/blob.h 00:03:54.192 TEST_HEADER include/spdk/conf.h 
00:03:54.192 LINK thread 00:03:54.192 TEST_HEADER include/spdk/config.h 00:03:54.192 TEST_HEADER include/spdk/cpuset.h 00:03:54.192 TEST_HEADER include/spdk/crc16.h 00:03:54.192 TEST_HEADER include/spdk/crc32.h 00:03:54.192 TEST_HEADER include/spdk/crc64.h 00:03:54.192 TEST_HEADER include/spdk/dif.h 00:03:54.192 TEST_HEADER include/spdk/dma.h 00:03:54.192 TEST_HEADER include/spdk/endian.h 00:03:54.192 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.192 TEST_HEADER include/spdk/env.h 00:03:54.192 TEST_HEADER include/spdk/event.h 00:03:54.192 TEST_HEADER include/spdk/fd_group.h 00:03:54.192 TEST_HEADER include/spdk/fd.h 00:03:54.192 TEST_HEADER include/spdk/file.h 00:03:54.192 TEST_HEADER include/spdk/fsdev.h 00:03:54.192 TEST_HEADER include/spdk/fsdev_module.h 00:03:54.192 TEST_HEADER include/spdk/ftl.h 00:03:54.192 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.192 TEST_HEADER include/spdk/hexlify.h 00:03:54.192 TEST_HEADER include/spdk/histogram_data.h 00:03:54.192 TEST_HEADER include/spdk/idxd.h 00:03:54.192 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.192 TEST_HEADER include/spdk/init.h 00:03:54.192 TEST_HEADER include/spdk/ioat.h 00:03:54.192 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.192 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.192 TEST_HEADER include/spdk/json.h 00:03:54.192 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.192 TEST_HEADER include/spdk/keyring.h 00:03:54.192 TEST_HEADER include/spdk/keyring_module.h 00:03:54.192 TEST_HEADER include/spdk/likely.h 00:03:54.192 TEST_HEADER include/spdk/log.h 00:03:54.192 TEST_HEADER include/spdk/lvol.h 00:03:54.192 TEST_HEADER include/spdk/md5.h 00:03:54.192 TEST_HEADER include/spdk/memory.h 00:03:54.192 TEST_HEADER include/spdk/mmio.h 00:03:54.192 TEST_HEADER include/spdk/nbd.h 00:03:54.192 TEST_HEADER include/spdk/net.h 00:03:54.192 TEST_HEADER include/spdk/notify.h 00:03:54.192 TEST_HEADER include/spdk/nvme.h 00:03:54.192 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.192 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.192 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.192 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.192 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.192 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.192 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.192 TEST_HEADER include/spdk/nvmf.h 00:03:54.192 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.193 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.193 TEST_HEADER include/spdk/opal.h 00:03:54.193 TEST_HEADER include/spdk/opal_spec.h 00:03:54.193 TEST_HEADER include/spdk/pci_ids.h 00:03:54.193 TEST_HEADER include/spdk/pipe.h 00:03:54.193 TEST_HEADER include/spdk/queue.h 00:03:54.193 TEST_HEADER include/spdk/reduce.h 00:03:54.193 TEST_HEADER include/spdk/rpc.h 00:03:54.193 TEST_HEADER include/spdk/scheduler.h 00:03:54.193 TEST_HEADER include/spdk/scsi.h 00:03:54.193 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.193 TEST_HEADER include/spdk/sock.h 00:03:54.193 TEST_HEADER include/spdk/stdinc.h 00:03:54.193 TEST_HEADER include/spdk/string.h 00:03:54.193 LINK idxd_perf 00:03:54.193 TEST_HEADER include/spdk/thread.h 00:03:54.193 TEST_HEADER include/spdk/trace.h 00:03:54.193 TEST_HEADER include/spdk/trace_parser.h 00:03:54.193 TEST_HEADER include/spdk/tree.h 00:03:54.193 TEST_HEADER include/spdk/ublk.h 00:03:54.193 CC test/event/event_perf/event_perf.o 00:03:54.193 TEST_HEADER include/spdk/util.h 00:03:54.193 TEST_HEADER include/spdk/uuid.h 00:03:54.193 TEST_HEADER include/spdk/version.h 00:03:54.193 CC examples/vmd/led/led.o 00:03:54.193 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:54.193 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.193 TEST_HEADER include/spdk/vhost.h 00:03:54.193 TEST_HEADER include/spdk/vmd.h 00:03:54.193 TEST_HEADER include/spdk/xor.h 00:03:54.193 TEST_HEADER include/spdk/zipf.h 00:03:54.193 CXX test/cpp_headers/accel.o 00:03:54.193 LINK nvme_fuzz 00:03:54.193 CC test/env/vtophys/vtophys.o 00:03:54.454 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.454 CXX test/cpp_headers/accel_module.o 00:03:54.454 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.454 CXX test/cpp_headers/assert.o 00:03:54.454 LINK event_perf 00:03:54.454 LINK led 00:03:54.455 LINK vtophys 00:03:54.455 LINK env_dpdk_post_init 00:03:54.711 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:54.711 CXX test/cpp_headers/barrier.o 00:03:54.711 CC test/app/histogram_perf/histogram_perf.o 00:03:54.711 CC test/event/reactor/reactor.o 00:03:54.711 CC examples/nvme/reconnect/reconnect.o 00:03:54.711 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.711 CC examples/nvme/hello_world/hello_world.o 00:03:54.711 LINK spdk_nvme_perf 00:03:54.711 CXX test/cpp_headers/base64.o 00:03:54.711 LINK histogram_perf 00:03:54.969 LINK reactor 00:03:54.969 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:54.969 CXX test/cpp_headers/bdev.o 00:03:54.969 LINK hello_world 00:03:54.969 LINK mem_callbacks 00:03:54.969 CC app/spdk_nvme_identify/identify.o 00:03:55.227 LINK reconnect 00:03:55.227 CC test/event/reactor_perf/reactor_perf.o 00:03:55.227 CC examples/accel/perf/accel_perf.o 00:03:55.227 CXX test/cpp_headers/bdev_module.o 00:03:55.227 CC test/env/memory/memory_ut.o 00:03:55.227 LINK nvme_manage 00:03:55.227 LINK hello_fsdev 00:03:55.227 CXX test/cpp_headers/bdev_zone.o 00:03:55.227 LINK reactor_perf 00:03:55.485 CC test/nvme/aer/aer.o 00:03:55.485 CXX test/cpp_headers/bit_array.o 00:03:55.485 CC examples/nvme/arbitration/arbitration.o 00:03:55.485 CC examples/nvme/hotplug/hotplug.o 00:03:55.485 CC test/event/app_repeat/app_repeat.o 00:03:55.743 CXX test/cpp_headers/bit_pool.o 00:03:55.743 CC examples/blob/hello_world/hello_blob.o 00:03:55.743 LINK accel_perf 00:03:55.743 LINK aer 00:03:55.743 LINK app_repeat 00:03:56.001 LINK hotplug 00:03:56.001 LINK spdk_nvme_identify 00:03:56.001 LINK arbitration 00:03:56.001 CXX test/cpp_headers/blob_bdev.o 00:03:56.001 LINK hello_blob 00:03:56.001 CXX test/cpp_headers/blobfs_bdev.o 00:03:56.001 CC test/nvme/reset/reset.o 00:03:56.260 CC test/event/scheduler/scheduler.o 00:03:56.260 CC app/spdk_nvme_discover/discovery_aer.o 00:03:56.260 CXX test/cpp_headers/blobfs.o 00:03:56.260 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:56.260 CC examples/bdev/hello_world/hello_bdev.o 00:03:56.260 CC examples/blob/cli/blobcli.o 00:03:56.260 LINK reset 00:03:56.260 CC examples/bdev/bdevperf/bdevperf.o 00:03:56.260 LINK iscsi_fuzz 00:03:56.518 CXX test/cpp_headers/blob.o 00:03:56.518 LINK scheduler 00:03:56.518 LINK cmb_copy 00:03:56.518 LINK memory_ut 00:03:56.518 LINK hello_bdev 00:03:56.518 LINK spdk_nvme_discover 00:03:56.518 CXX test/cpp_headers/conf.o 00:03:56.518 CC test/nvme/sgl/sgl.o 00:03:56.776 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.776 CC examples/nvme/abort/abort.o 00:03:56.776 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:56.776 CXX test/cpp_headers/config.o 00:03:56.776 CXX test/cpp_headers/cpuset.o 00:03:56.776 CC test/env/pci/pci_ut.o 00:03:56.776 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.776 CC app/spdk_top/spdk_top.o 00:03:57.033 LINK blobcli 00:03:57.033 LINK sgl 
00:03:57.033 LINK pmr_persistence 00:03:57.033 CC test/rpc_client/rpc_client_test.o 00:03:57.033 CXX test/cpp_headers/crc16.o 00:03:57.033 LINK abort 00:03:57.033 CXX test/cpp_headers/crc32.o 00:03:57.291 LINK rpc_client_test 00:03:57.291 CC test/nvme/e2edp/nvme_dp.o 00:03:57.291 LINK bdevperf 00:03:57.291 LINK pci_ut 00:03:57.291 CC app/vhost/vhost.o 00:03:57.291 LINK vhost_fuzz 00:03:57.291 CXX test/cpp_headers/crc64.o 00:03:57.291 CC test/nvme/overhead/overhead.o 00:03:57.550 LINK vhost 00:03:57.550 CC test/accel/dif/dif.o 00:03:57.550 LINK nvme_dp 00:03:57.550 CXX test/cpp_headers/dif.o 00:03:57.550 CC test/blobfs/mkfs/mkfs.o 00:03:57.550 CC test/app/jsoncat/jsoncat.o 00:03:57.550 CC test/nvme/err_injection/err_injection.o 00:03:57.808 CXX test/cpp_headers/dma.o 00:03:57.808 LINK overhead 00:03:57.808 CC examples/nvmf/nvmf/nvmf.o 00:03:57.808 LINK jsoncat 00:03:57.808 LINK mkfs 00:03:57.808 LINK spdk_top 00:03:57.808 CC app/spdk_dd/spdk_dd.o 00:03:57.808 LINK err_injection 00:03:57.808 CXX test/cpp_headers/endian.o 00:03:58.066 CC test/nvme/startup/startup.o 00:03:58.066 CC app/fio/nvme/fio_plugin.o 00:03:58.066 CC test/app/stub/stub.o 00:03:58.066 LINK nvmf 00:03:58.066 CXX test/cpp_headers/env_dpdk.o 00:03:58.066 CC test/nvme/reserve/reserve.o 00:03:58.324 LINK startup 00:03:58.324 LINK dif 00:03:58.324 CC app/fio/bdev/fio_plugin.o 00:03:58.324 LINK stub 00:03:58.324 LINK spdk_dd 00:03:58.324 CXX test/cpp_headers/env.o 00:03:58.324 CXX test/cpp_headers/event.o 00:03:58.582 CC test/lvol/esnap/esnap.o 00:03:58.582 LINK reserve 00:03:58.582 CC test/nvme/simple_copy/simple_copy.o 00:03:58.582 CC test/nvme/connect_stress/connect_stress.o 00:03:58.582 CXX test/cpp_headers/fd_group.o 00:03:58.582 CXX test/cpp_headers/fd.o 00:03:58.582 LINK spdk_nvme 00:03:58.582 CC test/nvme/boot_partition/boot_partition.o 00:03:58.842 CC test/bdev/bdevio/bdevio.o 00:03:58.842 LINK simple_copy 00:03:58.842 CC test/nvme/compliance/nvme_compliance.o 00:03:58.842 CXX test/cpp_headers/file.o 00:03:58.842 LINK connect_stress 00:03:58.842 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.842 LINK spdk_bdev 00:03:58.842 LINK boot_partition 00:03:58.842 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.842 CXX test/cpp_headers/fsdev.o 00:03:59.103 CXX test/cpp_headers/fsdev_module.o 00:03:59.103 CC test/nvme/fdp/fdp.o 00:03:59.103 CC test/nvme/cuse/cuse.o 00:03:59.103 CXX test/cpp_headers/ftl.o 00:03:59.103 LINK fused_ordering 00:03:59.103 LINK nvme_compliance 00:03:59.103 LINK doorbell_aers 00:03:59.103 LINK bdevio 00:03:59.103 CXX test/cpp_headers/gpt_spec.o 00:03:59.103 CXX test/cpp_headers/hexlify.o 00:03:59.361 CXX test/cpp_headers/histogram_data.o 00:03:59.361 CXX test/cpp_headers/idxd.o 00:03:59.361 CXX test/cpp_headers/idxd_spec.o 00:03:59.361 CXX test/cpp_headers/init.o 00:03:59.361 CXX test/cpp_headers/ioat.o 00:03:59.361 CXX test/cpp_headers/ioat_spec.o 00:03:59.361 LINK fdp 00:03:59.361 CXX test/cpp_headers/iscsi_spec.o 00:03:59.361 CXX test/cpp_headers/json.o 00:03:59.361 CXX test/cpp_headers/jsonrpc.o 00:03:59.361 CXX test/cpp_headers/keyring.o 00:03:59.619 CXX test/cpp_headers/keyring_module.o 00:03:59.619 CXX test/cpp_headers/likely.o 00:03:59.619 CXX test/cpp_headers/log.o 00:03:59.619 CXX test/cpp_headers/lvol.o 00:03:59.619 CXX test/cpp_headers/md5.o 00:03:59.619 CXX test/cpp_headers/memory.o 00:03:59.619 CXX test/cpp_headers/mmio.o 00:03:59.619 CXX test/cpp_headers/nbd.o 00:03:59.619 CXX test/cpp_headers/net.o 00:03:59.619 CXX test/cpp_headers/notify.o 00:03:59.619 CXX 
test/cpp_headers/nvme.o 00:03:59.878 CXX test/cpp_headers/nvme_intel.o 00:03:59.878 CXX test/cpp_headers/nvme_ocssd.o 00:03:59.878 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:59.878 CXX test/cpp_headers/nvme_spec.o 00:03:59.878 CXX test/cpp_headers/nvme_zns.o 00:03:59.878 CXX test/cpp_headers/nvmf_cmd.o 00:03:59.878 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:59.878 CXX test/cpp_headers/nvmf.o 00:03:59.878 CXX test/cpp_headers/nvmf_spec.o 00:03:59.878 CXX test/cpp_headers/nvmf_transport.o 00:03:59.878 CXX test/cpp_headers/opal.o 00:03:59.878 CXX test/cpp_headers/opal_spec.o 00:04:00.136 CXX test/cpp_headers/pci_ids.o 00:04:00.136 CXX test/cpp_headers/pipe.o 00:04:00.136 CXX test/cpp_headers/queue.o 00:04:00.136 CXX test/cpp_headers/reduce.o 00:04:00.136 CXX test/cpp_headers/rpc.o 00:04:00.136 CXX test/cpp_headers/scheduler.o 00:04:00.136 CXX test/cpp_headers/scsi.o 00:04:00.136 CXX test/cpp_headers/scsi_spec.o 00:04:00.136 CXX test/cpp_headers/sock.o 00:04:00.136 CXX test/cpp_headers/stdinc.o 00:04:00.394 CXX test/cpp_headers/string.o 00:04:00.394 CXX test/cpp_headers/thread.o 00:04:00.394 CXX test/cpp_headers/trace.o 00:04:00.394 CXX test/cpp_headers/trace_parser.o 00:04:00.394 CXX test/cpp_headers/tree.o 00:04:00.394 CXX test/cpp_headers/ublk.o 00:04:00.394 CXX test/cpp_headers/util.o 00:04:00.394 CXX test/cpp_headers/uuid.o 00:04:00.394 CXX test/cpp_headers/version.o 00:04:00.394 LINK cuse 00:04:00.394 CXX test/cpp_headers/vfio_user_pci.o 00:04:00.394 CXX test/cpp_headers/vfio_user_spec.o 00:04:00.394 CXX test/cpp_headers/vhost.o 00:04:00.394 CXX test/cpp_headers/vmd.o 00:04:00.653 CXX test/cpp_headers/xor.o 00:04:00.653 CXX test/cpp_headers/zipf.o 00:04:03.938 LINK esnap 00:04:04.197 00:04:04.197 real 1m44.505s 00:04:04.197 user 9m37.147s 00:04:04.197 sys 1m42.417s 00:04:04.197 21:31:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:04.197 21:31:04 make -- common/autotest_common.sh@10 -- $ set +x 00:04:04.197 ************************************ 00:04:04.197 END TEST make 00:04:04.197 ************************************ 00:04:04.197 21:31:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:04.197 21:31:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:04.197 21:31:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:04.197 21:31:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.197 21:31:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:04.197 21:31:04 -- pm/common@44 -- $ pid=5308 00:04:04.197 21:31:04 -- pm/common@50 -- $ kill -TERM 5308 00:04:04.197 21:31:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.197 21:31:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:04.197 21:31:04 -- pm/common@44 -- $ pid=5309 00:04:04.197 21:31:04 -- pm/common@50 -- $ kill -TERM 5309 00:04:04.197 21:31:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:04.197 21:31:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:04.197 21:31:04 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.197 21:31:04 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.197 21:31:04 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.458 21:31:05 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.458 21:31:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
00:04:04.458 21:31:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.458 21:31:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.458 21:31:05 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.458 21:31:05 -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.458 21:31:05 -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.458 21:31:05 -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.458 21:31:05 -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.458 21:31:05 -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.458 21:31:05 -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.458 21:31:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.458 21:31:05 -- scripts/common.sh@344 -- # case "$op" in 00:04:04.458 21:31:05 -- scripts/common.sh@345 -- # : 1 00:04:04.458 21:31:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.458 21:31:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.458 21:31:05 -- scripts/common.sh@365 -- # decimal 1 00:04:04.458 21:31:05 -- scripts/common.sh@353 -- # local d=1 00:04:04.458 21:31:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.458 21:31:05 -- scripts/common.sh@355 -- # echo 1 00:04:04.458 21:31:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.458 21:31:05 -- scripts/common.sh@366 -- # decimal 2 00:04:04.458 21:31:05 -- scripts/common.sh@353 -- # local d=2 00:04:04.458 21:31:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.458 21:31:05 -- scripts/common.sh@355 -- # echo 2 00:04:04.458 21:31:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.458 21:31:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.458 21:31:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.458 21:31:05 -- scripts/common.sh@368 -- # return 0 00:04:04.458 21:31:05 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.458 21:31:05 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.458 --rc genhtml_branch_coverage=1 00:04:04.458 --rc genhtml_function_coverage=1 00:04:04.458 --rc genhtml_legend=1 00:04:04.458 --rc geninfo_all_blocks=1 00:04:04.458 --rc geninfo_unexecuted_blocks=1 00:04:04.458 00:04:04.458 ' 00:04:04.458 21:31:05 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.458 --rc genhtml_branch_coverage=1 00:04:04.458 --rc genhtml_function_coverage=1 00:04:04.458 --rc genhtml_legend=1 00:04:04.458 --rc geninfo_all_blocks=1 00:04:04.458 --rc geninfo_unexecuted_blocks=1 00:04:04.458 00:04:04.458 ' 00:04:04.458 21:31:05 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.458 --rc genhtml_branch_coverage=1 00:04:04.458 --rc genhtml_function_coverage=1 00:04:04.458 --rc genhtml_legend=1 00:04:04.458 --rc geninfo_all_blocks=1 00:04:04.458 --rc geninfo_unexecuted_blocks=1 00:04:04.458 00:04:04.458 ' 00:04:04.458 21:31:05 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.458 --rc genhtml_branch_coverage=1 00:04:04.458 --rc genhtml_function_coverage=1 00:04:04.458 --rc genhtml_legend=1 00:04:04.458 --rc geninfo_all_blocks=1 00:04:04.458 --rc geninfo_unexecuted_blocks=1 00:04:04.458 00:04:04.458 ' 00:04:04.458 21:31:05 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.458 21:31:05 -- nvmf/common.sh@7 -- # uname -s 00:04:04.459 21:31:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.459 21:31:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.459 21:31:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.459 21:31:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.459 21:31:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.459 21:31:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.459 21:31:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.459 21:31:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.459 21:31:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.459 21:31:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.459 21:31:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:04:04.459 21:31:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:04:04.459 21:31:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.459 21:31:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.459 21:31:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:04.459 21:31:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.459 21:31:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.459 21:31:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.459 21:31:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.459 21:31:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.459 21:31:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.459 21:31:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.459 21:31:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.459 21:31:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.459 21:31:05 -- paths/export.sh@5 -- # export PATH 00:04:04.459 21:31:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.459 21:31:05 -- nvmf/common.sh@51 -- # : 0 00:04:04.459 21:31:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.459 21:31:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.459 21:31:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.459 21:31:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.459 21:31:05 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.459 21:31:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.459 21:31:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.459 21:31:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.459 21:31:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.459 21:31:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:04.459 21:31:05 -- spdk/autotest.sh@32 -- # uname -s 00:04:04.459 21:31:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:04.459 21:31:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:04.459 21:31:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.459 21:31:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:04.459 21:31:05 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.459 21:31:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:04.459 21:31:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:04.459 21:31:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:04.459 21:31:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54558 00:04:04.459 21:31:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:04.459 21:31:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:04.459 21:31:05 -- pm/common@17 -- # local monitor 00:04:04.459 21:31:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.459 21:31:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.459 21:31:05 -- pm/common@25 -- # sleep 1 00:04:04.459 21:31:05 -- pm/common@21 -- # date +%s 00:04:04.459 21:31:05 -- pm/common@21 -- # date +%s 00:04:04.459 21:31:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866265 00:04:04.459 21:31:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866265 00:04:04.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866265_collect-cpu-load.pm.log 00:04:04.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866265_collect-vmstat.pm.log 00:04:05.395 21:31:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:05.395 21:31:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:05.395 21:31:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.395 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.395 21:31:06 -- spdk/autotest.sh@59 -- # create_test_list 00:04:05.395 21:31:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:05.395 21:31:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.653 21:31:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:05.653 21:31:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:05.653 21:31:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:05.653 21:31:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:05.653 21:31:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:05.653 21:31:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:04:05.653 21:31:06 -- common/autotest_common.sh@1457 -- # uname 00:04:05.653 21:31:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:05.653 21:31:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:05.654 21:31:06 -- common/autotest_common.sh@1477 -- # uname 00:04:05.654 21:31:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:05.654 21:31:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:05.654 21:31:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:05.654 lcov: LCOV version 1.15 00:04:05.654 21:31:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:23.781 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:23.781 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:41.865 21:31:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:41.865 21:31:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.865 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:41.865 21:31:39 -- spdk/autotest.sh@78 -- # rm -f 00:04:41.865 21:31:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.865 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:41.865 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:41.865 21:31:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:41.865 21:31:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:41.865 21:31:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:41.865 21:31:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:41.865 21:31:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:41.865 21:31:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:41.865 21:31:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:41.865 21:31:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:41.865 21:31:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.865 21:31:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:41.865 21:31:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:41.865 21:31:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.865 21:31:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:04:41.865 21:31:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:04:41.865 21:31:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 
00:04:41.865 21:31:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:04:41.865 21:31:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:04:41.865 21:31:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.865 21:31:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:41.865 21:31:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:41.865 21:31:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.865 21:31:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:41.866 21:31:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:41.866 21:31:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:41.866 21:31:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.866 21:31:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:41.866 21:31:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.866 21:31:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.866 21:31:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:41.866 21:31:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:41.866 21:31:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.866 No valid GPT data, bailing 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # pt= 00:04:41.866 21:31:40 -- scripts/common.sh@395 -- # return 1 00:04:41.866 21:31:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.866 1+0 records in 00:04:41.866 1+0 records out 00:04:41.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422483 s, 248 MB/s 00:04:41.866 21:31:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.866 21:31:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.866 21:31:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:41.866 21:31:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:41.866 21:31:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:41.866 No valid GPT data, bailing 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # pt= 00:04:41.866 21:31:40 -- scripts/common.sh@395 -- # return 1 00:04:41.866 21:31:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:41.866 1+0 records in 00:04:41.866 1+0 records out 00:04:41.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00407641 s, 257 MB/s 00:04:41.866 21:31:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.866 21:31:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.866 21:31:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:41.866 21:31:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:41.866 21:31:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:41.866 No valid GPT data, bailing 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # pt= 00:04:41.866 21:31:40 -- scripts/common.sh@395 -- # return 1 00:04:41.866 21:31:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 
00:04:41.866 1+0 records in 00:04:41.866 1+0 records out 00:04:41.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436915 s, 240 MB/s 00:04:41.866 21:31:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.866 21:31:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.866 21:31:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:41.866 21:31:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:41.866 21:31:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:41.866 No valid GPT data, bailing 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:41.866 21:31:40 -- scripts/common.sh@394 -- # pt= 00:04:41.866 21:31:40 -- scripts/common.sh@395 -- # return 1 00:04:41.866 21:31:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:41.866 1+0 records in 00:04:41.866 1+0 records out 00:04:41.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442895 s, 237 MB/s 00:04:41.866 21:31:40 -- spdk/autotest.sh@105 -- # sync 00:04:41.866 21:31:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:41.866 21:31:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.866 21:31:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:42.125 21:31:42 -- spdk/autotest.sh@111 -- # uname -s 00:04:42.125 21:31:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:42.125 21:31:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:42.125 21:31:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.693 Hugepages 00:04:42.693 node hugesize free / total 00:04:42.693 node0 1048576kB 0 / 0 00:04:42.693 node0 2048kB 0 / 0 00:04:42.693 00:04:42.693 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.951 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:42.951 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:42.951 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:42.951 21:31:43 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.951 21:31:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.951 21:31:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.951 21:31:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.888 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.888 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.888 21:31:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:44.824 21:31:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:44.824 21:31:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:44.824 21:31:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.824 21:31:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:44.824 21:31:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:44.824 21:31:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:44.824 21:31:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.824 21:31:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:44.824 21:31:45 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:44.824 21:31:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:44.824 21:31:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:44.824 21:31:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.390 Waiting for block devices as requested 00:04:45.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.649 21:31:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:45.649 21:31:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:45.649 21:31:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:45.649 21:31:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:45.649 21:31:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1543 -- # continue 00:04:45.649 21:31:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:45.649 21:31:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:45.649 21:31:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 
00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:45.649 21:31:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:45.649 21:31:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:45.649 21:31:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:45.649 21:31:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:45.649 21:31:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:45.649 21:31:46 -- common/autotest_common.sh@1543 -- # continue 00:04:45.649 21:31:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:45.649 21:31:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.649 21:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.649 21:31:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:45.649 21:31:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.649 21:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:45.649 21:31:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.475 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.475 21:31:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:46.475 21:31:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.475 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:46.475 21:31:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:46.475 21:31:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:46.475 21:31:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.475 21:31:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:46.475 21:31:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:46.475 21:31:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:46.475 21:31:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:46.475 21:31:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:46.475 21:31:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:46.475 21:31:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:46.475 21:31:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.475 21:31:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.475 21:31:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:46.475 21:31:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:46.475 21:31:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:46.475 21:31:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:46.475 21:31:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:46.475 21:31:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:46.475 21:31:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.475 21:31:47 -- 
common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:46.475 21:31:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:46.475 21:31:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:46.475 21:31:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:46.475 21:31:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:46.475 21:31:47 -- common/autotest_common.sh@1572 -- # return 0 00:04:46.475 21:31:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:46.475 21:31:47 -- common/autotest_common.sh@1580 -- # return 0 00:04:46.475 21:31:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:46.475 21:31:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:46.475 21:31:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.475 21:31:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:46.475 21:31:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:46.475 21:31:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.475 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:46.475 21:31:47 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:46.475 21:31:47 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.475 21:31:47 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:46.475 21:31:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.475 21:31:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.475 21:31:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.475 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:46.734 ************************************ 00:04:46.734 START TEST env 00:04:46.734 ************************************ 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.734 * Looking for test storage... 00:04:46.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.734 21:31:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.734 21:31:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.734 21:31:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.734 21:31:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.734 21:31:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.734 21:31:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.734 21:31:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.734 21:31:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.734 21:31:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.734 21:31:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.734 21:31:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.734 21:31:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:46.734 21:31:47 env -- scripts/common.sh@345 -- # : 1 00:04:46.734 21:31:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.734 21:31:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.734 21:31:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:46.734 21:31:47 env -- scripts/common.sh@353 -- # local d=1 00:04:46.734 21:31:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.734 21:31:47 env -- scripts/common.sh@355 -- # echo 1 00:04:46.734 21:31:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.734 21:31:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:46.734 21:31:47 env -- scripts/common.sh@353 -- # local d=2 00:04:46.734 21:31:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.734 21:31:47 env -- scripts/common.sh@355 -- # echo 2 00:04:46.734 21:31:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.734 21:31:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.734 21:31:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.734 21:31:47 env -- scripts/common.sh@368 -- # return 0 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.734 --rc genhtml_branch_coverage=1 00:04:46.734 --rc genhtml_function_coverage=1 00:04:46.734 --rc genhtml_legend=1 00:04:46.734 --rc geninfo_all_blocks=1 00:04:46.734 --rc geninfo_unexecuted_blocks=1 00:04:46.734 00:04:46.734 ' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.734 --rc genhtml_branch_coverage=1 00:04:46.734 --rc genhtml_function_coverage=1 00:04:46.734 --rc genhtml_legend=1 00:04:46.734 --rc geninfo_all_blocks=1 00:04:46.734 --rc geninfo_unexecuted_blocks=1 00:04:46.734 00:04:46.734 ' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.734 --rc genhtml_branch_coverage=1 00:04:46.734 --rc genhtml_function_coverage=1 00:04:46.734 --rc genhtml_legend=1 00:04:46.734 --rc geninfo_all_blocks=1 00:04:46.734 --rc geninfo_unexecuted_blocks=1 00:04:46.734 00:04:46.734 ' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.734 --rc genhtml_branch_coverage=1 00:04:46.734 --rc genhtml_function_coverage=1 00:04:46.734 --rc genhtml_legend=1 00:04:46.734 --rc geninfo_all_blocks=1 00:04:46.734 --rc geninfo_unexecuted_blocks=1 00:04:46.734 00:04:46.734 ' 00:04:46.734 21:31:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.734 21:31:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.734 21:31:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.734 ************************************ 00:04:46.734 START TEST env_memory 00:04:46.734 ************************************ 00:04:46.734 21:31:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.734 00:04:46.734 00:04:46.734 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.734 http://cunit.sourceforge.net/ 00:04:46.734 00:04:46.734 00:04:46.734 Suite: memory 00:04:46.993 Test: alloc and free memory map ...[2024-12-10 21:31:47.528867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.993 passed 00:04:46.993 Test: mem map translation ...[2024-12-10 21:31:47.563303] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.993 [2024-12-10 21:31:47.563357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.993 [2024-12-10 21:31:47.563428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.993 [2024-12-10 21:31:47.563455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.993 passed 00:04:46.993 Test: mem map registration ...[2024-12-10 21:31:47.630536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:46.993 [2024-12-10 21:31:47.630576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:46.993 passed 00:04:46.993 Test: mem map adjacent registrations ...passed 00:04:46.993 00:04:46.993 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.993 suites 1 1 n/a 0 0 00:04:46.993 tests 4 4 4 0 0 00:04:46.993 asserts 152 152 152 0 n/a 00:04:46.993 00:04:46.993 Elapsed time = 0.223 seconds 00:04:46.993 00:04:46.993 real 0m0.243s 00:04:46.993 user 0m0.224s 00:04:46.993 sys 0m0.014s 00:04:46.993 21:31:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.993 21:31:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 ************************************ 00:04:46.993 END TEST env_memory 00:04:46.993 ************************************ 00:04:46.993 21:31:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.993 21:31:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.993 21:31:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.993 21:31:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 ************************************ 00:04:46.993 START TEST env_vtophys 00:04:46.993 ************************************ 00:04:46.993 21:31:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.253 EAL: lib.eal log level changed from notice to debug 00:04:47.253 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 1 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 2 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 3 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 4 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 5 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 6 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 7 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 8 as core 0 on socket 0 00:04:47.253 EAL: Detected lcore 9 as core 0 on socket 0 00:04:47.253 EAL: Maximum logical cores by configuration: 128 00:04:47.253 EAL: Detected CPU lcores: 10 00:04:47.253 EAL: Detected NUMA nodes: 1 00:04:47.253 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:47.253 EAL: Detected shared linkage of DPDK 00:04:47.253 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:47.253 EAL: Selected IOVA mode 'PA' 00:04:47.253 EAL: Probing VFIO support... 00:04:47.253 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.253 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.253 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.253 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.253 EAL: Setting up physically contiguous memory... 00:04:47.253 EAL: Setting maximum number of open files to 524288 00:04:47.253 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.253 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.253 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.253 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.253 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.253 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.253 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.253 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.253 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.253 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.253 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.253 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.253 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.253 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.253 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.253 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.253 EAL: Hugepages will be freed exactly as allocated. 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: TSC frequency is ~2200000 KHz 00:04:47.253 EAL: Main lcore 0 is ready (tid=7f2961a44a00;cpuset=[0]) 00:04:47.253 EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 0 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.253 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.253 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.253 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.253 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:47.253 00:04:47.253 00:04:47.253 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.253 http://cunit.sourceforge.net/ 00:04:47.253 00:04:47.253 00:04:47.253 Suite: components_suite 00:04:47.253 Test: vtophys_malloc_test ...passed 00:04:47.253 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 4 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.253 EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 4 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.253 EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 4 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.253 EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 4 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.253 EAL: Trying to obtain current memory policy. 00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.253 EAL: Restoring previous memory policy: 4 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.253 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.253 EAL: request: mp_malloc_sync 00:04:47.253 EAL: No shared files mode enabled, IPC is disabled 00:04:47.253 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.253 EAL: Trying to obtain current memory policy. 
00:04:47.253 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.513 EAL: Restoring previous memory policy: 4 00:04:47.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.513 EAL: request: mp_malloc_sync 00:04:47.513 EAL: No shared files mode enabled, IPC is disabled 00:04:47.513 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.513 EAL: request: mp_malloc_sync 00:04:47.513 EAL: No shared files mode enabled, IPC is disabled 00:04:47.513 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.513 EAL: Trying to obtain current memory policy. 00:04:47.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.513 EAL: Restoring previous memory policy: 4 00:04:47.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.513 EAL: request: mp_malloc_sync 00:04:47.513 EAL: No shared files mode enabled, IPC is disabled 00:04:47.513 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.513 EAL: request: mp_malloc_sync 00:04:47.513 EAL: No shared files mode enabled, IPC is disabled 00:04:47.513 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.513 EAL: Trying to obtain current memory policy. 00:04:47.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.513 EAL: Restoring previous memory policy: 4 00:04:47.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.513 EAL: request: mp_malloc_sync 00:04:47.513 EAL: No shared files mode enabled, IPC is disabled 00:04:47.513 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.772 EAL: request: mp_malloc_sync 00:04:47.772 EAL: No shared files mode enabled, IPC is disabled 00:04:47.772 EAL: Heap on socket 0 was shrunk by 258MB 00:04:47.772 EAL: Trying to obtain current memory policy. 00:04:47.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.772 EAL: Restoring previous memory policy: 4 00:04:47.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.772 EAL: request: mp_malloc_sync 00:04:47.772 EAL: No shared files mode enabled, IPC is disabled 00:04:47.772 EAL: Heap on socket 0 was expanded by 514MB 00:04:47.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.772 EAL: request: mp_malloc_sync 00:04:47.772 EAL: No shared files mode enabled, IPC is disabled 00:04:47.772 EAL: Heap on socket 0 was shrunk by 514MB 00:04:47.772 EAL: Trying to obtain current memory policy. 
00:04:47.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.030 EAL: Restoring previous memory policy: 4 00:04:48.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.030 EAL: request: mp_malloc_sync 00:04:48.030 EAL: No shared files mode enabled, IPC is disabled 00:04:48.030 EAL: Heap on socket 0 was expanded by 1026MB 00:04:48.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.289 passed 00:04:48.289 00:04:48.289 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.289 suites 1 1 n/a 0 0 00:04:48.289 tests 2 2 2 0 0 00:04:48.289 asserts 5400 5400 5400 0 n/a 00:04:48.289 00:04:48.289 Elapsed time = 0.902 seconds 00:04:48.289 EAL: request: mp_malloc_sync 00:04:48.289 EAL: No shared files mode enabled, IPC is disabled 00:04:48.289 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:48.289 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.289 EAL: request: mp_malloc_sync 00:04:48.289 EAL: No shared files mode enabled, IPC is disabled 00:04:48.289 EAL: Heap on socket 0 was shrunk by 2MB 00:04:48.289 EAL: No shared files mode enabled, IPC is disabled 00:04:48.289 EAL: No shared files mode enabled, IPC is disabled 00:04:48.289 EAL: No shared files mode enabled, IPC is disabled 00:04:48.289 00:04:48.289 real 0m1.124s 00:04:48.289 user 0m0.534s 00:04:48.289 sys 0m0.446s 00:04:48.289 21:31:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.289 21:31:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 END TEST env_vtophys 00:04:48.289 ************************************ 00:04:48.289 21:31:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.289 21:31:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.289 21:31:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.289 21:31:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 START TEST env_pci 00:04:48.289 ************************************ 00:04:48.289 21:31:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.289 00:04:48.289 00:04:48.289 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.289 http://cunit.sourceforge.net/ 00:04:48.289 00:04:48.289 00:04:48.289 Suite: pci 00:04:48.289 Test: pci_hook ...[2024-12-10 21:31:48.957715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56808 has claimed it 00:04:48.289 passed 00:04:48.289 00:04:48.289 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.289 suites 1 1 n/a 0 0 00:04:48.289 tests 1 1 1 0 0 00:04:48.289 asserts 25 25 25 0 n/a 00:04:48.289 00:04:48.289 Elapsed time = 0.002 seconds 00:04:48.289 EAL: Cannot find device (10000:00:01.0) 00:04:48.289 EAL: Failed to attach device on primary process 00:04:48.289 00:04:48.289 real 0m0.019s 00:04:48.289 user 0m0.009s 00:04:48.289 sys 0m0.010s 00:04:48.289 21:31:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.289 21:31:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 END TEST env_pci 00:04:48.289 ************************************ 00:04:48.289 21:31:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:48.289 21:31:48 env -- env/env.sh@15 -- # uname 00:04:48.289 21:31:48 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:48.289 21:31:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:48.289 21:31:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.289 21:31:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:48.289 21:31:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.289 21:31:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 START TEST env_dpdk_post_init 00:04:48.289 ************************************ 00:04:48.289 21:31:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.289 EAL: Detected CPU lcores: 10 00:04:48.289 EAL: Detected NUMA nodes: 1 00:04:48.289 EAL: Detected shared linkage of DPDK 00:04:48.289 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.289 EAL: Selected IOVA mode 'PA' 00:04:48.548 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.548 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:48.548 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:48.548 Starting DPDK initialization... 00:04:48.548 Starting SPDK post initialization... 00:04:48.548 SPDK NVMe probe 00:04:48.548 Attaching to 0000:00:10.0 00:04:48.548 Attaching to 0000:00:11.0 00:04:48.548 Attached to 0000:00:10.0 00:04:48.548 Attached to 0000:00:11.0 00:04:48.548 Cleaning up... 00:04:48.548 00:04:48.548 real 0m0.181s 00:04:48.548 user 0m0.051s 00:04:48.548 sys 0m0.030s 00:04:48.548 21:31:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.548 ************************************ 00:04:48.548 END TEST env_dpdk_post_init 00:04:48.548 ************************************ 00:04:48.548 21:31:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.548 21:31:49 env -- env/env.sh@26 -- # uname 00:04:48.549 21:31:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.549 21:31:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.549 21:31:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.549 21:31:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.549 21:31:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.549 ************************************ 00:04:48.549 START TEST env_mem_callbacks 00:04:48.549 ************************************ 00:04:48.549 21:31:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.549 EAL: Detected CPU lcores: 10 00:04:48.549 EAL: Detected NUMA nodes: 1 00:04:48.549 EAL: Detected shared linkage of DPDK 00:04:48.549 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.549 EAL: Selected IOVA mode 'PA' 00:04:48.807 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.807 00:04:48.807 00:04:48.807 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.807 http://cunit.sourceforge.net/ 00:04:48.807 00:04:48.807 00:04:48.807 Suite: memory 00:04:48.807 Test: test ... 
00:04:48.807 register 0x200000200000 2097152 00:04:48.807 malloc 3145728 00:04:48.807 register 0x200000400000 4194304 00:04:48.807 buf 0x200000500000 len 3145728 PASSED 00:04:48.807 malloc 64 00:04:48.807 buf 0x2000004fff40 len 64 PASSED 00:04:48.807 malloc 4194304 00:04:48.807 register 0x200000800000 6291456 00:04:48.807 buf 0x200000a00000 len 4194304 PASSED 00:04:48.807 free 0x200000500000 3145728 00:04:48.807 free 0x2000004fff40 64 00:04:48.807 unregister 0x200000400000 4194304 PASSED 00:04:48.807 free 0x200000a00000 4194304 00:04:48.807 unregister 0x200000800000 6291456 PASSED 00:04:48.807 malloc 8388608 00:04:48.807 register 0x200000400000 10485760 00:04:48.807 buf 0x200000600000 len 8388608 PASSED 00:04:48.807 free 0x200000600000 8388608 00:04:48.807 unregister 0x200000400000 10485760 PASSED 00:04:48.807 passed 00:04:48.807 00:04:48.807 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.807 suites 1 1 n/a 0 0 00:04:48.807 tests 1 1 1 0 0 00:04:48.807 asserts 15 15 15 0 n/a 00:04:48.807 00:04:48.807 Elapsed time = 0.006 seconds 00:04:48.807 00:04:48.807 real 0m0.142s 00:04:48.807 user 0m0.019s 00:04:48.807 sys 0m0.021s 00:04:48.807 21:31:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.807 ************************************ 00:04:48.807 END TEST env_mem_callbacks 00:04:48.807 ************************************ 00:04:48.807 21:31:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:48.807 00:04:48.807 real 0m2.165s 00:04:48.807 user 0m1.035s 00:04:48.807 sys 0m0.767s 00:04:48.807 21:31:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.807 21:31:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.807 ************************************ 00:04:48.807 END TEST env 00:04:48.807 ************************************ 00:04:48.807 21:31:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.807 21:31:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.807 21:31:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.807 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.807 ************************************ 00:04:48.807 START TEST rpc 00:04:48.807 ************************************ 00:04:48.807 21:31:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.807 * Looking for test storage... 
00:04:48.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.807 21:31:49 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.807 21:31:49 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.807 21:31:49 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.072 21:31:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.072 21:31:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.072 21:31:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.072 21:31:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.072 21:31:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.072 21:31:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.072 21:31:49 rpc -- scripts/common.sh@345 -- # : 1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.072 21:31:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.072 21:31:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.072 21:31:49 rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.072 21:31:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.072 21:31:49 rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.072 21:31:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.072 21:31:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.072 21:31:49 rpc -- scripts/common.sh@368 -- # return 0 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.072 --rc genhtml_branch_coverage=1 00:04:49.072 --rc genhtml_function_coverage=1 00:04:49.072 --rc genhtml_legend=1 00:04:49.072 --rc geninfo_all_blocks=1 00:04:49.072 --rc geninfo_unexecuted_blocks=1 00:04:49.072 00:04:49.072 ' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.072 --rc genhtml_branch_coverage=1 00:04:49.072 --rc genhtml_function_coverage=1 00:04:49.072 --rc genhtml_legend=1 00:04:49.072 --rc geninfo_all_blocks=1 00:04:49.072 --rc geninfo_unexecuted_blocks=1 00:04:49.072 00:04:49.072 ' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.072 --rc genhtml_branch_coverage=1 00:04:49.072 --rc genhtml_function_coverage=1 00:04:49.072 --rc 
genhtml_legend=1 00:04:49.072 --rc geninfo_all_blocks=1 00:04:49.072 --rc geninfo_unexecuted_blocks=1 00:04:49.072 00:04:49.072 ' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.072 --rc genhtml_branch_coverage=1 00:04:49.072 --rc genhtml_function_coverage=1 00:04:49.072 --rc genhtml_legend=1 00:04:49.072 --rc geninfo_all_blocks=1 00:04:49.072 --rc geninfo_unexecuted_blocks=1 00:04:49.072 00:04:49.072 ' 00:04:49.072 21:31:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56931 00:04:49.072 21:31:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:49.072 21:31:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.072 21:31:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56931 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 56931 ']' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.072 21:31:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.072 [2024-12-10 21:31:49.729240] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:04:49.072 [2024-12-10 21:31:49.729341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56931 ] 00:04:49.331 [2024-12-10 21:31:49.873348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.331 [2024-12-10 21:31:49.905240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.331 [2024-12-10 21:31:49.905305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56931' to capture a snapshot of events at runtime. 00:04:49.331 [2024-12-10 21:31:49.905334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.331 [2024-12-10 21:31:49.905342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.331 [2024-12-10 21:31:49.905349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56931 for offline analysis/debug. 
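The spdk_tgt startup notices just above describe two ways to look at the bdev tracepoints that rpc.sh enables with '-e bdev'. As a minimal sketch only (it assumes the spdk_trace app from the same SPDK build is on PATH; the pid and shm path are exactly the ones printed in the notices):

    spdk_trace -s spdk_tgt -p 56931          # attach to the live target and snapshot events at runtime
    cp /dev/shm/spdk_tgt_trace.pid56931 .    # or keep the shared-memory trace file for offline analysis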
00:04:49.331 [2024-12-10 21:31:49.905682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.331 [2024-12-10 21:31:49.944969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.331 21:31:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.331 21:31:50 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.331 21:31:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.331 21:31:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.331 21:31:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.331 21:31:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.331 21:31:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.331 21:31:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.331 21:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.331 ************************************ 00:04:49.331 START TEST rpc_integrity 00:04:49.331 ************************************ 00:04:49.331 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:49.331 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.331 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.331 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.331 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.331 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.331 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.591 { 00:04:49.591 "name": "Malloc0", 00:04:49.591 "aliases": [ 00:04:49.591 "2e0dcfac-4c56-4296-9bae-0f4bfdb2d80d" 00:04:49.591 ], 00:04:49.591 "product_name": "Malloc disk", 00:04:49.591 "block_size": 512, 00:04:49.591 "num_blocks": 16384, 00:04:49.591 "uuid": "2e0dcfac-4c56-4296-9bae-0f4bfdb2d80d", 00:04:49.591 "assigned_rate_limits": { 00:04:49.591 "rw_ios_per_sec": 0, 00:04:49.591 "rw_mbytes_per_sec": 0, 00:04:49.591 "r_mbytes_per_sec": 0, 00:04:49.591 "w_mbytes_per_sec": 0 00:04:49.591 }, 00:04:49.591 "claimed": false, 00:04:49.591 "zoned": false, 00:04:49.591 
"supported_io_types": { 00:04:49.591 "read": true, 00:04:49.591 "write": true, 00:04:49.591 "unmap": true, 00:04:49.591 "flush": true, 00:04:49.591 "reset": true, 00:04:49.591 "nvme_admin": false, 00:04:49.591 "nvme_io": false, 00:04:49.591 "nvme_io_md": false, 00:04:49.591 "write_zeroes": true, 00:04:49.591 "zcopy": true, 00:04:49.591 "get_zone_info": false, 00:04:49.591 "zone_management": false, 00:04:49.591 "zone_append": false, 00:04:49.591 "compare": false, 00:04:49.591 "compare_and_write": false, 00:04:49.591 "abort": true, 00:04:49.591 "seek_hole": false, 00:04:49.591 "seek_data": false, 00:04:49.591 "copy": true, 00:04:49.591 "nvme_iov_md": false 00:04:49.591 }, 00:04:49.591 "memory_domains": [ 00:04:49.591 { 00:04:49.591 "dma_device_id": "system", 00:04:49.591 "dma_device_type": 1 00:04:49.591 }, 00:04:49.591 { 00:04:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.591 "dma_device_type": 2 00:04:49.591 } 00:04:49.591 ], 00:04:49.591 "driver_specific": {} 00:04:49.591 } 00:04:49.591 ]' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 [2024-12-10 21:31:50.236132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:49.591 [2024-12-10 21:31:50.236200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.591 [2024-12-10 21:31:50.236222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cb5b90 00:04:49.591 [2024-12-10 21:31:50.236231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.591 [2024-12-10 21:31:50.237920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.591 [2024-12-10 21:31:50.237962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.591 Passthru0 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.591 { 00:04:49.591 "name": "Malloc0", 00:04:49.591 "aliases": [ 00:04:49.591 "2e0dcfac-4c56-4296-9bae-0f4bfdb2d80d" 00:04:49.591 ], 00:04:49.591 "product_name": "Malloc disk", 00:04:49.591 "block_size": 512, 00:04:49.591 "num_blocks": 16384, 00:04:49.591 "uuid": "2e0dcfac-4c56-4296-9bae-0f4bfdb2d80d", 00:04:49.591 "assigned_rate_limits": { 00:04:49.591 "rw_ios_per_sec": 0, 00:04:49.591 "rw_mbytes_per_sec": 0, 00:04:49.591 "r_mbytes_per_sec": 0, 00:04:49.591 "w_mbytes_per_sec": 0 00:04:49.591 }, 00:04:49.591 "claimed": true, 00:04:49.591 "claim_type": "exclusive_write", 00:04:49.591 "zoned": false, 00:04:49.591 "supported_io_types": { 00:04:49.591 "read": true, 00:04:49.591 "write": true, 00:04:49.591 "unmap": true, 00:04:49.591 "flush": true, 00:04:49.591 "reset": true, 00:04:49.591 "nvme_admin": false, 
00:04:49.591 "nvme_io": false, 00:04:49.591 "nvme_io_md": false, 00:04:49.591 "write_zeroes": true, 00:04:49.591 "zcopy": true, 00:04:49.591 "get_zone_info": false, 00:04:49.591 "zone_management": false, 00:04:49.591 "zone_append": false, 00:04:49.591 "compare": false, 00:04:49.591 "compare_and_write": false, 00:04:49.591 "abort": true, 00:04:49.591 "seek_hole": false, 00:04:49.591 "seek_data": false, 00:04:49.591 "copy": true, 00:04:49.591 "nvme_iov_md": false 00:04:49.591 }, 00:04:49.591 "memory_domains": [ 00:04:49.591 { 00:04:49.591 "dma_device_id": "system", 00:04:49.591 "dma_device_type": 1 00:04:49.591 }, 00:04:49.591 { 00:04:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.591 "dma_device_type": 2 00:04:49.591 } 00:04:49.591 ], 00:04:49.591 "driver_specific": {} 00:04:49.591 }, 00:04:49.591 { 00:04:49.591 "name": "Passthru0", 00:04:49.591 "aliases": [ 00:04:49.591 "29dcf882-e602-51c4-b4ae-c8d919c45539" 00:04:49.591 ], 00:04:49.591 "product_name": "passthru", 00:04:49.591 "block_size": 512, 00:04:49.591 "num_blocks": 16384, 00:04:49.591 "uuid": "29dcf882-e602-51c4-b4ae-c8d919c45539", 00:04:49.591 "assigned_rate_limits": { 00:04:49.591 "rw_ios_per_sec": 0, 00:04:49.591 "rw_mbytes_per_sec": 0, 00:04:49.591 "r_mbytes_per_sec": 0, 00:04:49.591 "w_mbytes_per_sec": 0 00:04:49.591 }, 00:04:49.591 "claimed": false, 00:04:49.591 "zoned": false, 00:04:49.591 "supported_io_types": { 00:04:49.591 "read": true, 00:04:49.591 "write": true, 00:04:49.591 "unmap": true, 00:04:49.591 "flush": true, 00:04:49.591 "reset": true, 00:04:49.591 "nvme_admin": false, 00:04:49.591 "nvme_io": false, 00:04:49.591 "nvme_io_md": false, 00:04:49.591 "write_zeroes": true, 00:04:49.591 "zcopy": true, 00:04:49.591 "get_zone_info": false, 00:04:49.591 "zone_management": false, 00:04:49.591 "zone_append": false, 00:04:49.591 "compare": false, 00:04:49.591 "compare_and_write": false, 00:04:49.591 "abort": true, 00:04:49.591 "seek_hole": false, 00:04:49.591 "seek_data": false, 00:04:49.591 "copy": true, 00:04:49.591 "nvme_iov_md": false 00:04:49.591 }, 00:04:49.591 "memory_domains": [ 00:04:49.591 { 00:04:49.591 "dma_device_id": "system", 00:04:49.591 "dma_device_type": 1 00:04:49.591 }, 00:04:49.591 { 00:04:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.591 "dma_device_type": 2 00:04:49.591 } 00:04:49.591 ], 00:04:49.591 "driver_specific": { 00:04:49.591 "passthru": { 00:04:49.591 "name": "Passthru0", 00:04:49.591 "base_bdev_name": "Malloc0" 00:04:49.591 } 00:04:49.591 } 00:04:49.591 } 00:04:49.591 ]' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.591 21:31:50 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.591 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.591 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.850 21:31:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.850 00:04:49.850 real 0m0.320s 00:04:49.850 user 0m0.215s 00:04:49.850 sys 0m0.038s 00:04:49.850 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.850 21:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.850 ************************************ 00:04:49.850 END TEST rpc_integrity 00:04:49.850 ************************************ 00:04:49.850 21:31:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.850 21:31:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.850 21:31:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.850 21:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.850 ************************************ 00:04:49.850 START TEST rpc_plugins 00:04:49.850 ************************************ 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:49.850 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.850 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.850 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.850 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.850 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.850 { 00:04:49.850 "name": "Malloc1", 00:04:49.850 "aliases": [ 00:04:49.850 "27aebae2-94bd-40e7-88a3-04e01133bd1d" 00:04:49.850 ], 00:04:49.850 "product_name": "Malloc disk", 00:04:49.850 "block_size": 4096, 00:04:49.850 "num_blocks": 256, 00:04:49.850 "uuid": "27aebae2-94bd-40e7-88a3-04e01133bd1d", 00:04:49.850 "assigned_rate_limits": { 00:04:49.850 "rw_ios_per_sec": 0, 00:04:49.850 "rw_mbytes_per_sec": 0, 00:04:49.850 "r_mbytes_per_sec": 0, 00:04:49.850 "w_mbytes_per_sec": 0 00:04:49.850 }, 00:04:49.850 "claimed": false, 00:04:49.850 "zoned": false, 00:04:49.850 "supported_io_types": { 00:04:49.850 "read": true, 00:04:49.850 "write": true, 00:04:49.850 "unmap": true, 00:04:49.850 "flush": true, 00:04:49.850 "reset": true, 00:04:49.851 "nvme_admin": false, 00:04:49.851 "nvme_io": false, 00:04:49.851 "nvme_io_md": false, 00:04:49.851 "write_zeroes": true, 00:04:49.851 "zcopy": true, 00:04:49.851 "get_zone_info": false, 00:04:49.851 "zone_management": false, 00:04:49.851 "zone_append": false, 00:04:49.851 "compare": false, 00:04:49.851 "compare_and_write": false, 00:04:49.851 "abort": true, 00:04:49.851 "seek_hole": false, 00:04:49.851 "seek_data": false, 00:04:49.851 "copy": true, 00:04:49.851 "nvme_iov_md": false 00:04:49.851 }, 00:04:49.851 "memory_domains": [ 00:04:49.851 { 
00:04:49.851 "dma_device_id": "system", 00:04:49.851 "dma_device_type": 1 00:04:49.851 }, 00:04:49.851 { 00:04:49.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.851 "dma_device_type": 2 00:04:49.851 } 00:04:49.851 ], 00:04:49.851 "driver_specific": {} 00:04:49.851 } 00:04:49.851 ]' 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:49.851 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.851 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:50.109 21:31:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:50.109 00:04:50.109 real 0m0.186s 00:04:50.109 user 0m0.113s 00:04:50.109 sys 0m0.020s 00:04:50.109 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.109 ************************************ 00:04:50.109 21:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.109 END TEST rpc_plugins 00:04:50.109 ************************************ 00:04:50.109 21:31:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.109 21:31:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.109 21:31:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.109 21:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.109 ************************************ 00:04:50.109 START TEST rpc_trace_cmd_test 00:04:50.109 ************************************ 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:50.109 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56931", 00:04:50.109 "tpoint_group_mask": "0x8", 00:04:50.109 "iscsi_conn": { 00:04:50.109 "mask": "0x2", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "scsi": { 00:04:50.109 "mask": "0x4", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "bdev": { 00:04:50.109 "mask": "0x8", 00:04:50.109 "tpoint_mask": "0xffffffffffffffff" 00:04:50.109 }, 00:04:50.109 "nvmf_rdma": { 00:04:50.109 "mask": "0x10", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "nvmf_tcp": { 00:04:50.109 "mask": "0x20", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "ftl": { 00:04:50.109 
"mask": "0x40", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "blobfs": { 00:04:50.109 "mask": "0x80", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "dsa": { 00:04:50.109 "mask": "0x200", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "thread": { 00:04:50.109 "mask": "0x400", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "nvme_pcie": { 00:04:50.109 "mask": "0x800", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "iaa": { 00:04:50.109 "mask": "0x1000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "nvme_tcp": { 00:04:50.109 "mask": "0x2000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "bdev_nvme": { 00:04:50.109 "mask": "0x4000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "sock": { 00:04:50.109 "mask": "0x8000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "blob": { 00:04:50.109 "mask": "0x10000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "bdev_raid": { 00:04:50.109 "mask": "0x20000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 }, 00:04:50.109 "scheduler": { 00:04:50.109 "mask": "0x40000", 00:04:50.109 "tpoint_mask": "0x0" 00:04:50.109 } 00:04:50.109 }' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.109 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.369 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.369 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.369 21:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.369 00:04:50.369 real 0m0.286s 00:04:50.369 user 0m0.253s 00:04:50.369 sys 0m0.026s 00:04:50.369 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.369 21:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.369 ************************************ 00:04:50.369 END TEST rpc_trace_cmd_test 00:04:50.369 ************************************ 00:04:50.369 21:31:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.369 21:31:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.369 21:31:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.369 21:31:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.369 21:31:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.369 21:31:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.369 ************************************ 00:04:50.369 START TEST rpc_daemon_integrity 00:04:50.369 ************************************ 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.369 
21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.369 { 00:04:50.369 "name": "Malloc2", 00:04:50.369 "aliases": [ 00:04:50.369 "be2fa58a-2087-4bf5-a087-c4f8eb2e914a" 00:04:50.369 ], 00:04:50.369 "product_name": "Malloc disk", 00:04:50.369 "block_size": 512, 00:04:50.369 "num_blocks": 16384, 00:04:50.369 "uuid": "be2fa58a-2087-4bf5-a087-c4f8eb2e914a", 00:04:50.369 "assigned_rate_limits": { 00:04:50.369 "rw_ios_per_sec": 0, 00:04:50.369 "rw_mbytes_per_sec": 0, 00:04:50.369 "r_mbytes_per_sec": 0, 00:04:50.369 "w_mbytes_per_sec": 0 00:04:50.369 }, 00:04:50.369 "claimed": false, 00:04:50.369 "zoned": false, 00:04:50.369 "supported_io_types": { 00:04:50.369 "read": true, 00:04:50.369 "write": true, 00:04:50.369 "unmap": true, 00:04:50.369 "flush": true, 00:04:50.369 "reset": true, 00:04:50.369 "nvme_admin": false, 00:04:50.369 "nvme_io": false, 00:04:50.369 "nvme_io_md": false, 00:04:50.369 "write_zeroes": true, 00:04:50.369 "zcopy": true, 00:04:50.369 "get_zone_info": false, 00:04:50.369 "zone_management": false, 00:04:50.369 "zone_append": false, 00:04:50.369 "compare": false, 00:04:50.369 "compare_and_write": false, 00:04:50.369 "abort": true, 00:04:50.369 "seek_hole": false, 00:04:50.369 "seek_data": false, 00:04:50.369 "copy": true, 00:04:50.369 "nvme_iov_md": false 00:04:50.369 }, 00:04:50.369 "memory_domains": [ 00:04:50.369 { 00:04:50.369 "dma_device_id": "system", 00:04:50.369 "dma_device_type": 1 00:04:50.369 }, 00:04:50.369 { 00:04:50.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.369 "dma_device_type": 2 00:04:50.369 } 00:04:50.369 ], 00:04:50.369 "driver_specific": {} 00:04:50.369 } 00:04:50.369 ]' 00:04:50.369 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.628 [2024-12-10 21:31:51.184556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.628 [2024-12-10 21:31:51.184609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:50.628 [2024-12-10 21:31:51.184627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1b440 00:04:50.628 [2024-12-10 21:31:51.184636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.628 [2024-12-10 21:31:51.186562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.628 [2024-12-10 21:31:51.186604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.628 Passthru0 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.628 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.628 { 00:04:50.628 "name": "Malloc2", 00:04:50.628 "aliases": [ 00:04:50.628 "be2fa58a-2087-4bf5-a087-c4f8eb2e914a" 00:04:50.628 ], 00:04:50.628 "product_name": "Malloc disk", 00:04:50.628 "block_size": 512, 00:04:50.628 "num_blocks": 16384, 00:04:50.628 "uuid": "be2fa58a-2087-4bf5-a087-c4f8eb2e914a", 00:04:50.628 "assigned_rate_limits": { 00:04:50.628 "rw_ios_per_sec": 0, 00:04:50.628 "rw_mbytes_per_sec": 0, 00:04:50.628 "r_mbytes_per_sec": 0, 00:04:50.628 "w_mbytes_per_sec": 0 00:04:50.628 }, 00:04:50.628 "claimed": true, 00:04:50.628 "claim_type": "exclusive_write", 00:04:50.628 "zoned": false, 00:04:50.628 "supported_io_types": { 00:04:50.628 "read": true, 00:04:50.628 "write": true, 00:04:50.628 "unmap": true, 00:04:50.628 "flush": true, 00:04:50.628 "reset": true, 00:04:50.628 "nvme_admin": false, 00:04:50.628 "nvme_io": false, 00:04:50.629 "nvme_io_md": false, 00:04:50.629 "write_zeroes": true, 00:04:50.629 "zcopy": true, 00:04:50.629 "get_zone_info": false, 00:04:50.629 "zone_management": false, 00:04:50.629 "zone_append": false, 00:04:50.629 "compare": false, 00:04:50.629 "compare_and_write": false, 00:04:50.629 "abort": true, 00:04:50.629 "seek_hole": false, 00:04:50.629 "seek_data": false, 00:04:50.629 "copy": true, 00:04:50.629 "nvme_iov_md": false 00:04:50.629 }, 00:04:50.629 "memory_domains": [ 00:04:50.629 { 00:04:50.629 "dma_device_id": "system", 00:04:50.629 "dma_device_type": 1 00:04:50.629 }, 00:04:50.629 { 00:04:50.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.629 "dma_device_type": 2 00:04:50.629 } 00:04:50.629 ], 00:04:50.629 "driver_specific": {} 00:04:50.629 }, 00:04:50.629 { 00:04:50.629 "name": "Passthru0", 00:04:50.629 "aliases": [ 00:04:50.629 "8793ccc5-5371-58ba-af9c-4e89cb898c54" 00:04:50.629 ], 00:04:50.629 "product_name": "passthru", 00:04:50.629 "block_size": 512, 00:04:50.629 "num_blocks": 16384, 00:04:50.629 "uuid": "8793ccc5-5371-58ba-af9c-4e89cb898c54", 00:04:50.629 "assigned_rate_limits": { 00:04:50.629 "rw_ios_per_sec": 0, 00:04:50.629 "rw_mbytes_per_sec": 0, 00:04:50.629 "r_mbytes_per_sec": 0, 00:04:50.629 "w_mbytes_per_sec": 0 00:04:50.629 }, 00:04:50.629 "claimed": false, 00:04:50.629 "zoned": false, 00:04:50.629 "supported_io_types": { 00:04:50.629 "read": true, 00:04:50.629 "write": true, 00:04:50.629 "unmap": true, 00:04:50.629 "flush": true, 00:04:50.629 "reset": true, 00:04:50.629 "nvme_admin": false, 00:04:50.629 "nvme_io": false, 00:04:50.629 
"nvme_io_md": false, 00:04:50.629 "write_zeroes": true, 00:04:50.629 "zcopy": true, 00:04:50.629 "get_zone_info": false, 00:04:50.629 "zone_management": false, 00:04:50.629 "zone_append": false, 00:04:50.629 "compare": false, 00:04:50.629 "compare_and_write": false, 00:04:50.629 "abort": true, 00:04:50.629 "seek_hole": false, 00:04:50.629 "seek_data": false, 00:04:50.629 "copy": true, 00:04:50.629 "nvme_iov_md": false 00:04:50.629 }, 00:04:50.629 "memory_domains": [ 00:04:50.629 { 00:04:50.629 "dma_device_id": "system", 00:04:50.629 "dma_device_type": 1 00:04:50.629 }, 00:04:50.629 { 00:04:50.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.629 "dma_device_type": 2 00:04:50.629 } 00:04:50.629 ], 00:04:50.629 "driver_specific": { 00:04:50.629 "passthru": { 00:04:50.629 "name": "Passthru0", 00:04:50.629 "base_bdev_name": "Malloc2" 00:04:50.629 } 00:04:50.629 } 00:04:50.629 } 00:04:50.629 ]' 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.629 00:04:50.629 real 0m0.325s 00:04:50.629 user 0m0.223s 00:04:50.629 sys 0m0.035s 00:04:50.629 ************************************ 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.629 21:31:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.629 END TEST rpc_daemon_integrity 00:04:50.629 ************************************ 00:04:50.629 21:31:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.629 21:31:51 rpc -- rpc/rpc.sh@84 -- # killprocess 56931 00:04:50.629 21:31:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 56931 ']' 00:04:50.629 21:31:51 rpc -- common/autotest_common.sh@958 -- # kill -0 56931 00:04:50.629 21:31:51 rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.629 21:31:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.629 21:31:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56931 00:04:50.888 21:31:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:50.888 killing process with pid 56931 00:04:50.888 21:31:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.888 21:31:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56931' 00:04:50.888 21:31:51 rpc -- common/autotest_common.sh@973 -- # kill 56931 00:04:50.888 21:31:51 rpc -- common/autotest_common.sh@978 -- # wait 56931 00:04:51.147 00:04:51.147 real 0m2.227s 00:04:51.147 user 0m3.013s 00:04:51.147 sys 0m0.562s 00:04:51.147 21:31:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.147 21:31:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.147 ************************************ 00:04:51.147 END TEST rpc 00:04:51.147 ************************************ 00:04:51.147 21:31:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:51.147 21:31:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.147 21:31:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.147 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:51.147 ************************************ 00:04:51.147 START TEST skip_rpc 00:04:51.147 ************************************ 00:04:51.147 21:31:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:51.147 * Looking for test storage... 00:04:51.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.147 21:31:51 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.147 21:31:51 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.147 21:31:51 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.406 21:31:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.406 --rc genhtml_branch_coverage=1 00:04:51.406 --rc genhtml_function_coverage=1 00:04:51.406 --rc genhtml_legend=1 00:04:51.406 --rc geninfo_all_blocks=1 00:04:51.406 --rc geninfo_unexecuted_blocks=1 00:04:51.406 00:04:51.406 ' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.406 --rc genhtml_branch_coverage=1 00:04:51.406 --rc genhtml_function_coverage=1 00:04:51.406 --rc genhtml_legend=1 00:04:51.406 --rc geninfo_all_blocks=1 00:04:51.406 --rc geninfo_unexecuted_blocks=1 00:04:51.406 00:04:51.406 ' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.406 --rc genhtml_branch_coverage=1 00:04:51.406 --rc genhtml_function_coverage=1 00:04:51.406 --rc genhtml_legend=1 00:04:51.406 --rc geninfo_all_blocks=1 00:04:51.406 --rc geninfo_unexecuted_blocks=1 00:04:51.406 00:04:51.406 ' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.406 --rc genhtml_branch_coverage=1 00:04:51.406 --rc genhtml_function_coverage=1 00:04:51.406 --rc genhtml_legend=1 00:04:51.406 --rc geninfo_all_blocks=1 00:04:51.406 --rc geninfo_unexecuted_blocks=1 00:04:51.406 00:04:51.406 ' 00:04:51.406 21:31:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.406 21:31:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.406 21:31:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.406 21:31:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.407 21:31:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.407 ************************************ 00:04:51.407 START TEST skip_rpc 00:04:51.407 ************************************ 00:04:51.407 21:31:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:51.407 21:31:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57124 00:04:51.407 21:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.407 21:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.407 21:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.407 [2024-12-10 21:31:52.030505] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:04:51.407 [2024-12-10 21:31:52.030777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57124 ] 00:04:51.407 [2024-12-10 21:31:52.177699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.665 [2024-12-10 21:31:52.211906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.665 [2024-12-10 21:31:52.251576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57124 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57124 ']' 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57124 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.935 21:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57124 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 57124' 00:04:56.935 killing process with pid 57124 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57124 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57124 00:04:56.935 00:04:56.935 ************************************ 00:04:56.935 END TEST skip_rpc 00:04:56.935 ************************************ 00:04:56.935 real 0m5.295s 00:04:56.935 user 0m5.010s 00:04:56.935 sys 0m0.197s 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.935 21:31:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.935 21:31:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.935 21:31:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.935 21:31:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.935 21:31:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.935 ************************************ 00:04:56.935 START TEST skip_rpc_with_json 00:04:56.935 ************************************ 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57205 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57205 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57205 ']' 00:04:56.935 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.936 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.936 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.936 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.936 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.936 [2024-12-10 21:31:57.382098] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
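While the target above starts, the harness blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A rough equivalent of that wait, sketched for illustration only (this is not the autotest waitforlisten helper itself; it assumes scripts/rpc.py from the same spdk_repo checkout and the socket path printed above):

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1    # keep polling until the RPC server answers a trivial call
    done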
00:04:56.936 [2024-12-10 21:31:57.382422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57205 ] 00:04:56.936 [2024-12-10 21:31:57.532605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.936 [2024-12-10 21:31:57.578249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.936 [2024-12-10 21:31:57.630289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.195 [2024-12-10 21:31:57.777843] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.195 request: 00:04:57.195 { 00:04:57.195 "trtype": "tcp", 00:04:57.195 "method": "nvmf_get_transports", 00:04:57.195 "req_id": 1 00:04:57.195 } 00:04:57.195 Got JSON-RPC error response 00:04:57.195 response: 00:04:57.195 { 00:04:57.195 "code": -19, 00:04:57.195 "message": "No such device" 00:04:57.195 } 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.195 [2024-12-10 21:31:57.786614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.195 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.196 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.196 { 00:04:57.196 "subsystems": [ 00:04:57.196 { 00:04:57.196 "subsystem": "fsdev", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "fsdev_set_opts", 00:04:57.196 "params": { 00:04:57.196 "fsdev_io_pool_size": 65535, 00:04:57.196 "fsdev_io_cache_size": 256 00:04:57.196 } 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "keyring", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "iobuf", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "iobuf_set_options", 00:04:57.196 "params": { 00:04:57.196 "small_pool_count": 8192, 00:04:57.196 "large_pool_count": 1024, 00:04:57.196 "small_bufsize": 8192, 00:04:57.196 "large_bufsize": 135168, 00:04:57.196 "enable_numa": false 00:04:57.196 } 
00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "sock", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "sock_set_default_impl", 00:04:57.196 "params": { 00:04:57.196 "impl_name": "uring" 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "sock_impl_set_options", 00:04:57.196 "params": { 00:04:57.196 "impl_name": "ssl", 00:04:57.196 "recv_buf_size": 4096, 00:04:57.196 "send_buf_size": 4096, 00:04:57.196 "enable_recv_pipe": true, 00:04:57.196 "enable_quickack": false, 00:04:57.196 "enable_placement_id": 0, 00:04:57.196 "enable_zerocopy_send_server": true, 00:04:57.196 "enable_zerocopy_send_client": false, 00:04:57.196 "zerocopy_threshold": 0, 00:04:57.196 "tls_version": 0, 00:04:57.196 "enable_ktls": false 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "sock_impl_set_options", 00:04:57.196 "params": { 00:04:57.196 "impl_name": "posix", 00:04:57.196 "recv_buf_size": 2097152, 00:04:57.196 "send_buf_size": 2097152, 00:04:57.196 "enable_recv_pipe": true, 00:04:57.196 "enable_quickack": false, 00:04:57.196 "enable_placement_id": 0, 00:04:57.196 "enable_zerocopy_send_server": true, 00:04:57.196 "enable_zerocopy_send_client": false, 00:04:57.196 "zerocopy_threshold": 0, 00:04:57.196 "tls_version": 0, 00:04:57.196 "enable_ktls": false 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "sock_impl_set_options", 00:04:57.196 "params": { 00:04:57.196 "impl_name": "uring", 00:04:57.196 "recv_buf_size": 2097152, 00:04:57.196 "send_buf_size": 2097152, 00:04:57.196 "enable_recv_pipe": true, 00:04:57.196 "enable_quickack": false, 00:04:57.196 "enable_placement_id": 0, 00:04:57.196 "enable_zerocopy_send_server": false, 00:04:57.196 "enable_zerocopy_send_client": false, 00:04:57.196 "zerocopy_threshold": 0, 00:04:57.196 "tls_version": 0, 00:04:57.196 "enable_ktls": false 00:04:57.196 } 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "vmd", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "accel", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "accel_set_options", 00:04:57.196 "params": { 00:04:57.196 "small_cache_size": 128, 00:04:57.196 "large_cache_size": 16, 00:04:57.196 "task_count": 2048, 00:04:57.196 "sequence_count": 2048, 00:04:57.196 "buf_count": 2048 00:04:57.196 } 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "bdev", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "bdev_set_options", 00:04:57.196 "params": { 00:04:57.196 "bdev_io_pool_size": 65535, 00:04:57.196 "bdev_io_cache_size": 256, 00:04:57.196 "bdev_auto_examine": true, 00:04:57.196 "iobuf_small_cache_size": 128, 00:04:57.196 "iobuf_large_cache_size": 16 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "bdev_raid_set_options", 00:04:57.196 "params": { 00:04:57.196 "process_window_size_kb": 1024, 00:04:57.196 "process_max_bandwidth_mb_sec": 0 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "bdev_iscsi_set_options", 00:04:57.196 "params": { 00:04:57.196 "timeout_sec": 30 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "bdev_nvme_set_options", 00:04:57.196 "params": { 00:04:57.196 "action_on_timeout": "none", 00:04:57.196 "timeout_us": 0, 00:04:57.196 "timeout_admin_us": 0, 00:04:57.196 "keep_alive_timeout_ms": 10000, 00:04:57.196 "arbitration_burst": 0, 00:04:57.196 "low_priority_weight": 0, 00:04:57.196 "medium_priority_weight": 
0, 00:04:57.196 "high_priority_weight": 0, 00:04:57.196 "nvme_adminq_poll_period_us": 10000, 00:04:57.196 "nvme_ioq_poll_period_us": 0, 00:04:57.196 "io_queue_requests": 0, 00:04:57.196 "delay_cmd_submit": true, 00:04:57.196 "transport_retry_count": 4, 00:04:57.196 "bdev_retry_count": 3, 00:04:57.196 "transport_ack_timeout": 0, 00:04:57.196 "ctrlr_loss_timeout_sec": 0, 00:04:57.196 "reconnect_delay_sec": 0, 00:04:57.196 "fast_io_fail_timeout_sec": 0, 00:04:57.196 "disable_auto_failback": false, 00:04:57.196 "generate_uuids": false, 00:04:57.196 "transport_tos": 0, 00:04:57.196 "nvme_error_stat": false, 00:04:57.196 "rdma_srq_size": 0, 00:04:57.196 "io_path_stat": false, 00:04:57.196 "allow_accel_sequence": false, 00:04:57.196 "rdma_max_cq_size": 0, 00:04:57.196 "rdma_cm_event_timeout_ms": 0, 00:04:57.196 "dhchap_digests": [ 00:04:57.196 "sha256", 00:04:57.196 "sha384", 00:04:57.196 "sha512" 00:04:57.196 ], 00:04:57.196 "dhchap_dhgroups": [ 00:04:57.196 "null", 00:04:57.196 "ffdhe2048", 00:04:57.196 "ffdhe3072", 00:04:57.196 "ffdhe4096", 00:04:57.196 "ffdhe6144", 00:04:57.196 "ffdhe8192" 00:04:57.196 ], 00:04:57.196 "rdma_umr_per_io": false 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "bdev_nvme_set_hotplug", 00:04:57.196 "params": { 00:04:57.196 "period_us": 100000, 00:04:57.196 "enable": false 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "bdev_wait_for_examine" 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "scsi", 00:04:57.196 "config": null 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "scheduler", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "framework_set_scheduler", 00:04:57.196 "params": { 00:04:57.196 "name": "static" 00:04:57.196 } 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "vhost_scsi", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "vhost_blk", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "ublk", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "nbd", 00:04:57.196 "config": [] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "nvmf", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "nvmf_set_config", 00:04:57.196 "params": { 00:04:57.196 "discovery_filter": "match_any", 00:04:57.196 "admin_cmd_passthru": { 00:04:57.196 "identify_ctrlr": false 00:04:57.196 }, 00:04:57.196 "dhchap_digests": [ 00:04:57.196 "sha256", 00:04:57.196 "sha384", 00:04:57.196 "sha512" 00:04:57.196 ], 00:04:57.196 "dhchap_dhgroups": [ 00:04:57.196 "null", 00:04:57.196 "ffdhe2048", 00:04:57.196 "ffdhe3072", 00:04:57.196 "ffdhe4096", 00:04:57.196 "ffdhe6144", 00:04:57.196 "ffdhe8192" 00:04:57.196 ] 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "nvmf_set_max_subsystems", 00:04:57.196 "params": { 00:04:57.196 "max_subsystems": 1024 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "nvmf_set_crdt", 00:04:57.196 "params": { 00:04:57.196 "crdt1": 0, 00:04:57.196 "crdt2": 0, 00:04:57.196 "crdt3": 0 00:04:57.196 } 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "method": "nvmf_create_transport", 00:04:57.196 "params": { 00:04:57.196 "trtype": "TCP", 00:04:57.196 "max_queue_depth": 128, 00:04:57.196 "max_io_qpairs_per_ctrlr": 127, 00:04:57.196 "in_capsule_data_size": 4096, 00:04:57.196 "max_io_size": 131072, 00:04:57.196 "io_unit_size": 131072, 00:04:57.196 "max_aq_depth": 128, 00:04:57.196 
"num_shared_buffers": 511, 00:04:57.196 "buf_cache_size": 4294967295, 00:04:57.196 "dif_insert_or_strip": false, 00:04:57.196 "zcopy": false, 00:04:57.196 "c2h_success": true, 00:04:57.196 "sock_priority": 0, 00:04:57.196 "abort_timeout_sec": 1, 00:04:57.196 "ack_timeout": 0, 00:04:57.196 "data_wr_pool_size": 0 00:04:57.196 } 00:04:57.196 } 00:04:57.196 ] 00:04:57.196 }, 00:04:57.196 { 00:04:57.196 "subsystem": "iscsi", 00:04:57.196 "config": [ 00:04:57.196 { 00:04:57.196 "method": "iscsi_set_options", 00:04:57.196 "params": { 00:04:57.196 "node_base": "iqn.2016-06.io.spdk", 00:04:57.196 "max_sessions": 128, 00:04:57.196 "max_connections_per_session": 2, 00:04:57.196 "max_queue_depth": 64, 00:04:57.196 "default_time2wait": 2, 00:04:57.196 "default_time2retain": 20, 00:04:57.196 "first_burst_length": 8192, 00:04:57.196 "immediate_data": true, 00:04:57.196 "allow_duplicated_isid": false, 00:04:57.196 "error_recovery_level": 0, 00:04:57.196 "nop_timeout": 60, 00:04:57.196 "nop_in_interval": 30, 00:04:57.196 "disable_chap": false, 00:04:57.196 "require_chap": false, 00:04:57.197 "mutual_chap": false, 00:04:57.197 "chap_group": 0, 00:04:57.197 "max_large_datain_per_connection": 64, 00:04:57.197 "max_r2t_per_connection": 4, 00:04:57.197 "pdu_pool_size": 36864, 00:04:57.197 "immediate_data_pool_size": 16384, 00:04:57.197 "data_out_pool_size": 2048 00:04:57.197 } 00:04:57.197 } 00:04:57.197 ] 00:04:57.197 } 00:04:57.197 ] 00:04:57.197 } 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57205 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57205 ']' 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57205 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.197 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57205 00:04:57.455 killing process with pid 57205 00:04:57.455 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.455 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.455 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57205' 00:04:57.455 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57205 00:04:57.455 21:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57205 00:04:57.716 21:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57225 00:04:57.716 21:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.716 21:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57225 ']' 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # 
uname 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.981 killing process with pid 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57225' 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57225 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.981 00:05:02.981 real 0m6.386s 00:05:02.981 user 0m6.051s 00:05:02.981 sys 0m0.514s 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.981 ************************************ 00:05:02.981 END TEST skip_rpc_with_json 00:05:02.981 ************************************ 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.981 21:32:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.981 21:32:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.981 21:32:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.981 21:32:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.981 ************************************ 00:05:02.981 START TEST skip_rpc_with_delay 00:05:02.981 ************************************ 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.981 
21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.981 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.240 [2024-12-10 21:32:03.817611] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.240 00:05:03.240 real 0m0.087s 00:05:03.240 user 0m0.057s 00:05:03.240 sys 0m0.028s 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.240 ************************************ 00:05:03.240 END TEST skip_rpc_with_delay 00:05:03.240 ************************************ 00:05:03.240 21:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:03.240 21:32:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:03.240 21:32:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:03.240 21:32:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:03.240 21:32:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.240 21:32:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.240 21:32:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.240 ************************************ 00:05:03.240 START TEST exit_on_failed_rpc_init 00:05:03.240 ************************************ 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57335 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57335 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57335 ']' 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.240 21:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.240 [2024-12-10 21:32:03.960671] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
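A minimal sketch of the negative check that skip_rpc_with_delay exercised above, assuming the same SPDK build tree under /home/vagrant/spdk_repo/spdk: combining --no-rpc-server with --wait-for-rpc is expected to abort before the app starts, and the non-zero exit code is the pass condition the NOT/valid_exec_arg wrappers assert.
  # expected to exit non-zero and print:
  #   app.c: ... *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit code: $?"   # non-zero here is what the test treats as success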
00:05:03.240 [2024-12-10 21:32:03.960771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:05:03.513 [2024-12-10 21:32:04.114651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.513 [2024-12-10 21:32:04.158310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.513 [2024-12-10 21:32:04.205909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.481 21:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.481 [2024-12-10 21:32:05.068111] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:04.481 [2024-12-10 21:32:05.068208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57353 ] 00:05:04.481 [2024-12-10 21:32:05.220548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.742 [2024-12-10 21:32:05.261616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.742 [2024-12-10 21:32:05.261718] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
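The *ERROR* line above (and the follow-up errors just below) is the point of exit_on_failed_rpc_init: the second spdk_tgt instance on core mask 0x2 reuses the default RPC socket /var/tmp/spdk.sock, which the first instance (pid 57335) still owns, so rpc_listen fails and the app stops. A hedged sketch of the collision and of the usual way to run two targets side by side; the alternate socket path /var/tmp/spdk_second.sock is made up for illustration, only the -r flag itself is taken from this run.
  # first target owns the default RPC socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second target on another core mask but the same default socket: fails as logged above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
  # to actually run two targets concurrently, give each its own RPC socket (hypothetical path)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock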
00:05:04.742 [2024-12-10 21:32:05.261737] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:04.742 [2024-12-10 21:32:05.261747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57335 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57335 ']' 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57335 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57335 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57335' 00:05:04.742 killing process with pid 57335 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57335 00:05:04.742 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57335 00:05:05.000 00:05:05.000 real 0m1.714s 00:05:05.000 user 0m2.109s 00:05:05.000 sys 0m0.339s 00:05:05.000 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.000 ************************************ 00:05:05.000 END TEST exit_on_failed_rpc_init 00:05:05.000 ************************************ 00:05:05.000 21:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.000 21:32:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:05.000 00:05:05.000 real 0m13.897s 00:05:05.000 user 0m13.416s 00:05:05.000 sys 0m1.288s 00:05:05.000 21:32:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.000 ************************************ 00:05:05.000 21:32:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.000 END TEST skip_rpc 00:05:05.000 ************************************ 00:05:05.000 21:32:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:05.000 21:32:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.000 21:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.000 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.000 
************************************ 00:05:05.000 START TEST rpc_client 00:05:05.000 ************************************ 00:05:05.000 21:32:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:05.259 * Looking for test storage... 00:05:05.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.259 21:32:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.259 --rc genhtml_branch_coverage=1 00:05:05.259 --rc genhtml_function_coverage=1 00:05:05.259 --rc genhtml_legend=1 00:05:05.259 --rc geninfo_all_blocks=1 00:05:05.259 --rc geninfo_unexecuted_blocks=1 00:05:05.259 00:05:05.259 ' 00:05:05.259 21:32:05 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.260 --rc genhtml_branch_coverage=1 00:05:05.260 --rc genhtml_function_coverage=1 00:05:05.260 --rc genhtml_legend=1 00:05:05.260 --rc geninfo_all_blocks=1 00:05:05.260 --rc geninfo_unexecuted_blocks=1 00:05:05.260 00:05:05.260 ' 00:05:05.260 21:32:05 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.260 --rc genhtml_branch_coverage=1 00:05:05.260 --rc genhtml_function_coverage=1 00:05:05.260 --rc genhtml_legend=1 00:05:05.260 --rc geninfo_all_blocks=1 00:05:05.260 --rc geninfo_unexecuted_blocks=1 00:05:05.260 00:05:05.260 ' 00:05:05.260 21:32:05 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.260 --rc genhtml_branch_coverage=1 00:05:05.260 --rc genhtml_function_coverage=1 00:05:05.260 --rc genhtml_legend=1 00:05:05.260 --rc geninfo_all_blocks=1 00:05:05.260 --rc geninfo_unexecuted_blocks=1 00:05:05.260 00:05:05.260 ' 00:05:05.260 21:32:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:05.260 OK 00:05:05.260 21:32:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:05.260 00:05:05.260 real 0m0.226s 00:05:05.260 user 0m0.143s 00:05:05.260 sys 0m0.092s 00:05:05.260 21:32:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.260 ************************************ 00:05:05.260 END TEST rpc_client 00:05:05.260 21:32:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:05.260 ************************************ 00:05:05.260 21:32:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:05.260 21:32:05 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.260 21:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.260 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.260 ************************************ 00:05:05.260 START TEST json_config 00:05:05.260 ************************************ 00:05:05.260 21:32:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:05.260 21:32:06 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.260 21:32:06 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.260 21:32:06 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.519 21:32:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.519 21:32:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.519 21:32:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.519 21:32:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.519 21:32:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.519 21:32:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:05.519 21:32:06 json_config -- scripts/common.sh@345 -- # : 1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.519 21:32:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.519 21:32:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@353 -- # local d=1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.519 21:32:06 json_config -- scripts/common.sh@355 -- # echo 1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.519 21:32:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@353 -- # local d=2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.519 21:32:06 json_config -- scripts/common.sh@355 -- # echo 2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.519 21:32:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.519 21:32:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.519 21:32:06 json_config -- scripts/common.sh@368 -- # return 0 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.519 --rc genhtml_branch_coverage=1 00:05:05.519 --rc genhtml_function_coverage=1 00:05:05.519 --rc genhtml_legend=1 00:05:05.519 --rc geninfo_all_blocks=1 00:05:05.519 --rc geninfo_unexecuted_blocks=1 00:05:05.519 00:05:05.519 ' 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.519 --rc genhtml_branch_coverage=1 00:05:05.519 --rc genhtml_function_coverage=1 00:05:05.519 --rc genhtml_legend=1 00:05:05.519 --rc geninfo_all_blocks=1 00:05:05.519 --rc geninfo_unexecuted_blocks=1 00:05:05.519 00:05:05.519 ' 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.519 --rc genhtml_branch_coverage=1 00:05:05.519 --rc genhtml_function_coverage=1 00:05:05.519 --rc genhtml_legend=1 00:05:05.519 --rc geninfo_all_blocks=1 00:05:05.519 --rc geninfo_unexecuted_blocks=1 00:05:05.519 00:05:05.519 ' 00:05:05.519 21:32:06 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.519 --rc genhtml_branch_coverage=1 00:05:05.519 --rc genhtml_function_coverage=1 00:05:05.519 --rc genhtml_legend=1 00:05:05.519 --rc geninfo_all_blocks=1 00:05:05.519 --rc geninfo_unexecuted_blocks=1 00:05:05.519 00:05:05.519 ' 00:05:05.519 21:32:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.519 21:32:06 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.519 21:32:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.520 21:32:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.520 21:32:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.520 21:32:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.520 21:32:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.520 21:32:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.520 21:32:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.520 21:32:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.520 21:32:06 json_config -- paths/export.sh@5 -- # export PATH 00:05:05.520 21:32:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@51 -- # : 0 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.520 21:32:06 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.520 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.520 21:32:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.520 INFO: JSON configuration test init 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.520 21:32:06 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:05.520 21:32:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.520 21:32:06 json_config -- json_config/common.sh@10 -- # shift 
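json_config_test_start_app expands the app_params, app_socket and configs_path entries declared above into a single spdk_tgt invocation; a sketch of the equivalent direct command, with all flags, sizes and paths taken from the launch that follows in this log (the waitforlisten helper then polls until the RPC socket accepts connections before the test proceeds):
  # equivalent of app_params[target]='-m 0x1 -s 1024' plus app_socket[target]='/var/tmp/spdk_tgt.sock'
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &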
00:05:05.520 21:32:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.520 21:32:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.520 21:32:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.520 21:32:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.520 21:32:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.520 21:32:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57492 00:05:05.520 Waiting for target to run... 00:05:05.520 21:32:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.520 21:32:06 json_config -- json_config/common.sh@25 -- # waitforlisten 57492 /var/tmp/spdk_tgt.sock 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 57492 ']' 00:05:05.520 21:32:06 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.520 21:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.520 [2024-12-10 21:32:06.264717] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:05.520 [2024-12-10 21:32:06.264824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57492 ] 00:05:06.087 [2024-12-10 21:32:06.588359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.087 [2024-12-10 21:32:06.621647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.654 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:06.654 21:32:07 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.654 21:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:06.654 21:32:07 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:06.654 21:32:07 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:07.219 [2024-12-10 21:32:07.708297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:07.219 21:32:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.219 21:32:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:07.219 21:32:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:07.219 21:32:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@54 -- # sort 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:07.477 21:32:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.477 21:32:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:07.477 21:32:08 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:07.477 21:32:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.477 21:32:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.735 21:32:08 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:07.735 21:32:08 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:07.735 21:32:08 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:07.735 21:32:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.735 21:32:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.992 MallocForNvmf0 00:05:07.992 21:32:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:07.992 21:32:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.250 MallocForNvmf1 00:05:08.250 21:32:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.250 21:32:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.508 [2024-12-10 21:32:09.155402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.508 21:32:09 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.508 21:32:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.766 21:32:09 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.766 21:32:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.024 21:32:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.024 21:32:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.281 21:32:09 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.281 21:32:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.539 [2024-12-10 21:32:10.284068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.539 21:32:10 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:09.539 21:32:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.539 21:32:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 21:32:10 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:09.797 21:32:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.797 21:32:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.797 21:32:10 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:09.797 21:32:10 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.797 21:32:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.055 MallocBdevForConfigChangeCheck 00:05:10.055 21:32:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:10.055 21:32:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.055 21:32:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.055 21:32:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:10.055 21:32:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.625 INFO: shutting down applications... 00:05:10.625 21:32:11 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:10.625 21:32:11 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:10.625 21:32:11 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:10.625 21:32:11 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:10.625 21:32:11 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.893 Calling clear_iscsi_subsystem 00:05:10.893 Calling clear_nvmf_subsystem 00:05:10.893 Calling clear_nbd_subsystem 00:05:10.893 Calling clear_ublk_subsystem 00:05:10.893 Calling clear_vhost_blk_subsystem 00:05:10.893 Calling clear_vhost_scsi_subsystem 00:05:10.893 Calling clear_bdev_subsystem 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.893 21:32:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:11.459 21:32:12 json_config -- json_config/json_config.sh@352 -- # break 00:05:11.459 21:32:12 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:11.459 21:32:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:11.459 21:32:12 json_config -- json_config/common.sh@31 -- # local app=target 00:05:11.460 21:32:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.460 21:32:12 json_config -- json_config/common.sh@35 -- # [[ -n 57492 ]] 00:05:11.460 21:32:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57492 00:05:11.460 21:32:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.460 21:32:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.460 21:32:12 json_config -- json_config/common.sh@41 -- # kill -0 57492 00:05:11.460 21:32:12 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:12.028 21:32:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.028 21:32:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.028 21:32:12 json_config -- json_config/common.sh@41 -- # kill -0 57492 00:05:12.028 21:32:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.028 21:32:12 json_config -- json_config/common.sh@43 -- # break 00:05:12.028 SPDK target shutdown done 00:05:12.028 INFO: relaunching applications... 00:05:12.028 21:32:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.028 21:32:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.028 21:32:12 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:12.028 21:32:12 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.028 21:32:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:12.028 21:32:12 json_config -- json_config/common.sh@10 -- # shift 00:05:12.028 21:32:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.028 21:32:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.028 21:32:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.028 21:32:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.028 21:32:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.028 21:32:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57693 00:05:12.028 21:32:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.028 21:32:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.028 Waiting for target to run... 00:05:12.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.028 21:32:12 json_config -- json_config/common.sh@25 -- # waitforlisten 57693 /var/tmp/spdk_tgt.sock 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 57693 ']' 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.028 21:32:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.028 [2024-12-10 21:32:12.603082] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
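The relaunch above replays the configuration captured earlier with save_config: instead of --wait-for-rpc plus individual RPCs, the new target reads the whole subsystem tree from the JSON file at startup via --json. A sketch of that save/relaunch round trip, assuming the same paths as this run; the explicit redirect into spdk_tgt_config.json is an assumption for illustration, since the test script manages that file through its own helpers.
  # capture the running target's configuration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  # stop the old target (the test sends SIGINT and polls with kill -0), then restart from the file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json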
00:05:12.028 [2024-12-10 21:32:12.603417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57693 ] 00:05:12.288 [2024-12-10 21:32:12.922812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.288 [2024-12-10 21:32:12.950957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.563 [2024-12-10 21:32:13.084053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.563 [2024-12-10 21:32:13.289099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.563 [2024-12-10 21:32:13.321192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.130 00:05:13.130 INFO: Checking if target configuration is the same... 00:05:13.130 21:32:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.130 21:32:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:13.130 21:32:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.130 21:32:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:13.130 21:32:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.130 21:32:13 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.130 21:32:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:13.130 21:32:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.130 + '[' 2 -ne 2 ']' 00:05:13.130 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:13.130 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:13.130 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:13.130 +++ basename /dev/fd/62 00:05:13.130 ++ mktemp /tmp/62.XXX 00:05:13.130 + tmp_file_1=/tmp/62.l6I 00:05:13.130 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.130 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.130 + tmp_file_2=/tmp/spdk_tgt_config.json.GSI 00:05:13.130 + ret=0 00:05:13.130 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:13.389 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:13.389 + diff -u /tmp/62.l6I /tmp/spdk_tgt_config.json.GSI 00:05:13.389 INFO: JSON config files are the same 00:05:13.389 + echo 'INFO: JSON config files are the same' 00:05:13.389 + rm /tmp/62.l6I /tmp/spdk_tgt_config.json.GSI 00:05:13.389 + exit 0 00:05:13.389 INFO: changing configuration and checking if this can be detected... 00:05:13.389 21:32:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:13.389 21:32:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
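The json_diff.sh run above reduces to three steps: normalize the live configuration and the on-disk file with config_filter.py -method sort, diff the results, and treat an empty diff as "same configuration". A condensed sketch of that check; the temp-file names below are made up for illustration (the script uses mktemp), everything else is taken from this run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'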
00:05:13.389 21:32:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.389 21:32:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.957 21:32:14 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.957 21:32:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:13.957 21:32:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.957 + '[' 2 -ne 2 ']' 00:05:13.957 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:13.957 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:13.957 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:13.957 +++ basename /dev/fd/62 00:05:13.957 ++ mktemp /tmp/62.XXX 00:05:13.957 + tmp_file_1=/tmp/62.zNt 00:05:13.957 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.957 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.957 + tmp_file_2=/tmp/spdk_tgt_config.json.Wnp 00:05:13.957 + ret=0 00:05:13.957 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.216 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.216 + diff -u /tmp/62.zNt /tmp/spdk_tgt_config.json.Wnp 00:05:14.216 + ret=1 00:05:14.216 + echo '=== Start of file: /tmp/62.zNt ===' 00:05:14.216 + cat /tmp/62.zNt 00:05:14.216 + echo '=== End of file: /tmp/62.zNt ===' 00:05:14.216 + echo '' 00:05:14.216 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Wnp ===' 00:05:14.216 + cat /tmp/spdk_tgt_config.json.Wnp 00:05:14.216 + echo '=== End of file: /tmp/spdk_tgt_config.json.Wnp ===' 00:05:14.216 + echo '' 00:05:14.216 + rm /tmp/62.zNt /tmp/spdk_tgt_config.json.Wnp 00:05:14.216 + exit 1 00:05:14.216 INFO: configuration change detected. 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
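The comparison exercised above boils down to a short shell round trip: dump the running target's configuration over its RPC socket, normalize both sides with config_filter.py, and diff them; identical configs exit 0, and deleting MallocBdevForConfigChangeCheck makes the next diff exit non-zero. A minimal sketch, assuming the spdk_repo layout and the /var/tmp/spdk_tgt.sock socket used in this run (an illustration of the flow, not the json_diff.sh script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    live=$(mktemp) disk=$(mktemp)
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    "$filter" -method sort < "$cfg" > "$disk"
    diff -u "$live" "$disk" && echo 'INFO: JSON config files are the same'
    # Mutate the live config, then the same diff should report a change.
    "$rpc" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    diff -u "$live" "$disk" || echo 'INFO: configuration change detected.'
    rm -f "$live" "$disk"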
00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 57693 ]] 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:14.216 21:32:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.216 21:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.473 21:32:15 json_config -- json_config/json_config.sh@330 -- # killprocess 57693 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@954 -- # '[' -z 57693 ']' 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@958 -- # kill -0 57693 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@959 -- # uname 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57693 00:05:14.473 killing process with pid 57693 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57693' 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@973 -- # kill 57693 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@978 -- # wait 57693 00:05:14.473 21:32:15 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.473 21:32:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.473 21:32:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.732 INFO: Success 00:05:14.732 21:32:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:14.732 21:32:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:14.732 00:05:14.732 real 0m9.292s 00:05:14.732 user 0m13.960s 00:05:14.732 sys 0m1.518s 00:05:14.732 
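The teardown traced here follows the harness's killprocess pattern: verify the pid is set and still alive, confirm via ps that it is the expected reactor process (and not a sudo wrapper), then kill it and wait for it to exit. A rough sketch of those steps (illustrative only; the helper name is made up):

    killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1               # must still be running
      local name; name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
      [ "$name" = sudo ] && return 1                       # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
    }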
************************************ 00:05:14.732 END TEST json_config 00:05:14.732 ************************************ 00:05:14.732 21:32:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.732 21:32:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.732 21:32:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.732 21:32:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.732 21:32:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.733 21:32:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.733 ************************************ 00:05:14.733 START TEST json_config_extra_key 00:05:14.733 ************************************ 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 21:32:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.733 --rc genhtml_branch_coverage=1 00:05:14.733 --rc genhtml_function_coverage=1 00:05:14.733 --rc genhtml_legend=1 00:05:14.733 --rc geninfo_all_blocks=1 00:05:14.733 --rc geninfo_unexecuted_blocks=1 00:05:14.733 00:05:14.733 ' 00:05:14.733 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.733 21:32:15 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.733 21:32:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.733 21:32:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.733 21:32:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.733 21:32:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.733 21:32:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.733 21:32:15 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.733 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.733 21:32:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.734 21:32:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.734 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:14.994 INFO: launching applications... 00:05:14.994 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.994 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.994 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
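The common.sh plumbing sourced above keeps one associative array per concern, keyed by app name, so the same start/stop helpers can address each app uniformly. In outline, with the values that appear in this run (an outline of the traced declarations, not the common.sh source):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    # json_config_test_start_app composes the command line from these tables,
    #   spdk_tgt ${app_params[target]} -r ${app_socket[target]} --json ${configs_path[target]}
    # and records the launched pid in app_pid[target] for the shutdown step that follows.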
00:05:14.994 21:32:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57847 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.994 Waiting for target to run... 00:05:14.994 21:32:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57847 /var/tmp/spdk_tgt.sock 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57847 ']' 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.994 21:32:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.994 [2024-12-10 21:32:15.574875] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:14.994 [2024-12-10 21:32:15.574969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57847 ] 00:05:15.254 [2024-12-10 21:32:15.874812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.254 [2024-12-10 21:32:15.911321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.254 [2024-12-10 21:32:15.941847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.199 00:05:16.199 INFO: shutting down applications... 00:05:16.199 21:32:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.199 21:32:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:16.199 21:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
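With the target up and listening on /var/tmp/spdk_tgt.sock, shutdown is a bounded poll: send SIGINT, then re-check the pid with kill -0 every half second for at most 30 iterations before reporting 'SPDK target shutdown done'. The trace that follows is an instance of roughly this loop (sketch only):

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # process gone, stop polling
      sleep 0.5
    done
    echo 'SPDK target shutdown done'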
00:05:16.199 21:32:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57847 ]] 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57847 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57847 00:05:16.199 21:32:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57847 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.458 SPDK target shutdown done 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.458 21:32:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.458 21:32:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.458 Success 00:05:16.458 00:05:16.458 real 0m1.878s 00:05:16.458 user 0m1.830s 00:05:16.458 sys 0m0.335s 00:05:16.458 21:32:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.458 ************************************ 00:05:16.458 END TEST json_config_extra_key 00:05:16.458 21:32:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.458 ************************************ 00:05:16.458 21:32:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.458 21:32:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.458 21:32:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.458 21:32:17 -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 ************************************ 00:05:16.718 START TEST alias_rpc 00:05:16.718 ************************************ 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.718 * Looking for test storage... 
00:05:16.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.718 21:32:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.718 --rc genhtml_branch_coverage=1 00:05:16.718 --rc genhtml_function_coverage=1 00:05:16.718 --rc genhtml_legend=1 00:05:16.718 --rc geninfo_all_blocks=1 00:05:16.718 --rc geninfo_unexecuted_blocks=1 00:05:16.718 00:05:16.718 ' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.718 --rc genhtml_branch_coverage=1 00:05:16.718 --rc genhtml_function_coverage=1 00:05:16.718 --rc genhtml_legend=1 00:05:16.718 --rc geninfo_all_blocks=1 00:05:16.718 --rc geninfo_unexecuted_blocks=1 00:05:16.718 00:05:16.718 ' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.718 --rc genhtml_branch_coverage=1 00:05:16.718 --rc genhtml_function_coverage=1 00:05:16.718 --rc genhtml_legend=1 00:05:16.718 --rc geninfo_all_blocks=1 00:05:16.718 --rc geninfo_unexecuted_blocks=1 00:05:16.718 00:05:16.718 ' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.718 --rc genhtml_branch_coverage=1 00:05:16.718 --rc genhtml_function_coverage=1 00:05:16.718 --rc genhtml_legend=1 00:05:16.718 --rc geninfo_all_blocks=1 00:05:16.718 --rc geninfo_unexecuted_blocks=1 00:05:16.718 00:05:16.718 ' 00:05:16.718 21:32:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.718 21:32:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57920 00:05:16.718 21:32:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57920 00:05:16.718 21:32:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57920 ']' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.718 21:32:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.978 [2024-12-10 21:32:17.501218] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:16.978 [2024-12-10 21:32:17.501563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57920 ] 00:05:16.978 [2024-12-10 21:32:17.652150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.978 [2024-12-10 21:32:17.692955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.978 [2024-12-10 21:32:17.738574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.238 21:32:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.238 21:32:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.238 21:32:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:17.497 21:32:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57920 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57920 ']' 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57920 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57920 00:05:17.497 killing process with pid 57920 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57920' 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 57920 00:05:17.497 21:32:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 57920 00:05:17.756 ************************************ 00:05:17.756 END TEST alias_rpc 00:05:17.756 ************************************ 00:05:17.756 00:05:17.756 real 0m1.209s 00:05:17.756 user 0m1.380s 00:05:17.756 sys 0m0.339s 00:05:17.756 21:32:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.756 21:32:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 21:32:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:17.756 21:32:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.756 21:32:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.756 21:32:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.756 21:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 ************************************ 00:05:17.756 START TEST spdkcli_tcp 00:05:17.756 ************************************ 00:05:17.756 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:18.016 * Looking for test storage... 
00:05:18.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.016 21:32:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.016 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:18.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.016 --rc genhtml_branch_coverage=1 00:05:18.016 --rc genhtml_function_coverage=1 00:05:18.016 --rc genhtml_legend=1 00:05:18.017 --rc geninfo_all_blocks=1 00:05:18.017 --rc geninfo_unexecuted_blocks=1 00:05:18.017 00:05:18.017 ' 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.017 --rc genhtml_branch_coverage=1 00:05:18.017 --rc genhtml_function_coverage=1 00:05:18.017 --rc genhtml_legend=1 00:05:18.017 --rc geninfo_all_blocks=1 00:05:18.017 --rc geninfo_unexecuted_blocks=1 00:05:18.017 
00:05:18.017 ' 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.017 --rc genhtml_branch_coverage=1 00:05:18.017 --rc genhtml_function_coverage=1 00:05:18.017 --rc genhtml_legend=1 00:05:18.017 --rc geninfo_all_blocks=1 00:05:18.017 --rc geninfo_unexecuted_blocks=1 00:05:18.017 00:05:18.017 ' 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:18.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.017 --rc genhtml_branch_coverage=1 00:05:18.017 --rc genhtml_function_coverage=1 00:05:18.017 --rc genhtml_legend=1 00:05:18.017 --rc geninfo_all_blocks=1 00:05:18.017 --rc geninfo_unexecuted_blocks=1 00:05:18.017 00:05:18.017 ' 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57996 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.017 21:32:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57996 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57996 ']' 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.017 21:32:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.017 [2024-12-10 21:32:18.772719] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:18.017 [2024-12-10 21:32:18.772992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:05:18.276 [2024-12-10 21:32:18.918620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.276 [2024-12-10 21:32:18.954165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.276 [2024-12-10 21:32:18.954187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.276 [2024-12-10 21:32:18.995372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.535 21:32:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.535 21:32:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:18.535 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58006 00:05:18.535 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.535 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.809 [ 00:05:18.809 "bdev_malloc_delete", 00:05:18.809 "bdev_malloc_create", 00:05:18.809 "bdev_null_resize", 00:05:18.809 "bdev_null_delete", 00:05:18.809 "bdev_null_create", 00:05:18.809 "bdev_nvme_cuse_unregister", 00:05:18.809 "bdev_nvme_cuse_register", 00:05:18.809 "bdev_opal_new_user", 00:05:18.809 "bdev_opal_set_lock_state", 00:05:18.809 "bdev_opal_delete", 00:05:18.809 "bdev_opal_get_info", 00:05:18.809 "bdev_opal_create", 00:05:18.809 "bdev_nvme_opal_revert", 00:05:18.809 "bdev_nvme_opal_init", 00:05:18.809 "bdev_nvme_send_cmd", 00:05:18.809 "bdev_nvme_set_keys", 00:05:18.809 "bdev_nvme_get_path_iostat", 00:05:18.809 "bdev_nvme_get_mdns_discovery_info", 00:05:18.809 "bdev_nvme_stop_mdns_discovery", 00:05:18.809 "bdev_nvme_start_mdns_discovery", 00:05:18.809 "bdev_nvme_set_multipath_policy", 00:05:18.809 "bdev_nvme_set_preferred_path", 00:05:18.809 "bdev_nvme_get_io_paths", 00:05:18.809 "bdev_nvme_remove_error_injection", 00:05:18.809 "bdev_nvme_add_error_injection", 00:05:18.809 "bdev_nvme_get_discovery_info", 00:05:18.809 "bdev_nvme_stop_discovery", 00:05:18.809 "bdev_nvme_start_discovery", 00:05:18.809 "bdev_nvme_get_controller_health_info", 00:05:18.809 "bdev_nvme_disable_controller", 00:05:18.809 "bdev_nvme_enable_controller", 00:05:18.809 "bdev_nvme_reset_controller", 00:05:18.809 "bdev_nvme_get_transport_statistics", 00:05:18.809 "bdev_nvme_apply_firmware", 00:05:18.809 "bdev_nvme_detach_controller", 00:05:18.809 "bdev_nvme_get_controllers", 00:05:18.809 "bdev_nvme_attach_controller", 00:05:18.809 "bdev_nvme_set_hotplug", 00:05:18.809 "bdev_nvme_set_options", 00:05:18.809 "bdev_passthru_delete", 00:05:18.809 "bdev_passthru_create", 00:05:18.809 "bdev_lvol_set_parent_bdev", 00:05:18.809 "bdev_lvol_set_parent", 00:05:18.809 "bdev_lvol_check_shallow_copy", 00:05:18.809 "bdev_lvol_start_shallow_copy", 00:05:18.809 "bdev_lvol_grow_lvstore", 00:05:18.809 "bdev_lvol_get_lvols", 00:05:18.809 "bdev_lvol_get_lvstores", 00:05:18.809 "bdev_lvol_delete", 00:05:18.809 "bdev_lvol_set_read_only", 00:05:18.809 "bdev_lvol_resize", 00:05:18.809 "bdev_lvol_decouple_parent", 00:05:18.809 "bdev_lvol_inflate", 00:05:18.809 "bdev_lvol_rename", 00:05:18.809 "bdev_lvol_clone_bdev", 00:05:18.809 "bdev_lvol_clone", 00:05:18.809 "bdev_lvol_snapshot", 
00:05:18.809 "bdev_lvol_create", 00:05:18.809 "bdev_lvol_delete_lvstore", 00:05:18.809 "bdev_lvol_rename_lvstore", 00:05:18.809 "bdev_lvol_create_lvstore", 00:05:18.809 "bdev_raid_set_options", 00:05:18.809 "bdev_raid_remove_base_bdev", 00:05:18.809 "bdev_raid_add_base_bdev", 00:05:18.809 "bdev_raid_delete", 00:05:18.809 "bdev_raid_create", 00:05:18.809 "bdev_raid_get_bdevs", 00:05:18.809 "bdev_error_inject_error", 00:05:18.809 "bdev_error_delete", 00:05:18.809 "bdev_error_create", 00:05:18.809 "bdev_split_delete", 00:05:18.809 "bdev_split_create", 00:05:18.809 "bdev_delay_delete", 00:05:18.809 "bdev_delay_create", 00:05:18.809 "bdev_delay_update_latency", 00:05:18.809 "bdev_zone_block_delete", 00:05:18.809 "bdev_zone_block_create", 00:05:18.809 "blobfs_create", 00:05:18.809 "blobfs_detect", 00:05:18.809 "blobfs_set_cache_size", 00:05:18.809 "bdev_aio_delete", 00:05:18.809 "bdev_aio_rescan", 00:05:18.809 "bdev_aio_create", 00:05:18.809 "bdev_ftl_set_property", 00:05:18.809 "bdev_ftl_get_properties", 00:05:18.809 "bdev_ftl_get_stats", 00:05:18.809 "bdev_ftl_unmap", 00:05:18.809 "bdev_ftl_unload", 00:05:18.809 "bdev_ftl_delete", 00:05:18.809 "bdev_ftl_load", 00:05:18.809 "bdev_ftl_create", 00:05:18.809 "bdev_virtio_attach_controller", 00:05:18.809 "bdev_virtio_scsi_get_devices", 00:05:18.809 "bdev_virtio_detach_controller", 00:05:18.809 "bdev_virtio_blk_set_hotplug", 00:05:18.809 "bdev_iscsi_delete", 00:05:18.809 "bdev_iscsi_create", 00:05:18.809 "bdev_iscsi_set_options", 00:05:18.809 "bdev_uring_delete", 00:05:18.809 "bdev_uring_rescan", 00:05:18.809 "bdev_uring_create", 00:05:18.809 "accel_error_inject_error", 00:05:18.809 "ioat_scan_accel_module", 00:05:18.809 "dsa_scan_accel_module", 00:05:18.809 "iaa_scan_accel_module", 00:05:18.809 "keyring_file_remove_key", 00:05:18.810 "keyring_file_add_key", 00:05:18.810 "keyring_linux_set_options", 00:05:18.810 "fsdev_aio_delete", 00:05:18.810 "fsdev_aio_create", 00:05:18.810 "iscsi_get_histogram", 00:05:18.810 "iscsi_enable_histogram", 00:05:18.810 "iscsi_set_options", 00:05:18.810 "iscsi_get_auth_groups", 00:05:18.810 "iscsi_auth_group_remove_secret", 00:05:18.810 "iscsi_auth_group_add_secret", 00:05:18.810 "iscsi_delete_auth_group", 00:05:18.810 "iscsi_create_auth_group", 00:05:18.810 "iscsi_set_discovery_auth", 00:05:18.810 "iscsi_get_options", 00:05:18.810 "iscsi_target_node_request_logout", 00:05:18.810 "iscsi_target_node_set_redirect", 00:05:18.810 "iscsi_target_node_set_auth", 00:05:18.810 "iscsi_target_node_add_lun", 00:05:18.810 "iscsi_get_stats", 00:05:18.810 "iscsi_get_connections", 00:05:18.810 "iscsi_portal_group_set_auth", 00:05:18.810 "iscsi_start_portal_group", 00:05:18.810 "iscsi_delete_portal_group", 00:05:18.810 "iscsi_create_portal_group", 00:05:18.810 "iscsi_get_portal_groups", 00:05:18.810 "iscsi_delete_target_node", 00:05:18.810 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.810 "iscsi_target_node_add_pg_ig_maps", 00:05:18.810 "iscsi_create_target_node", 00:05:18.810 "iscsi_get_target_nodes", 00:05:18.810 "iscsi_delete_initiator_group", 00:05:18.810 "iscsi_initiator_group_remove_initiators", 00:05:18.810 "iscsi_initiator_group_add_initiators", 00:05:18.810 "iscsi_create_initiator_group", 00:05:18.810 "iscsi_get_initiator_groups", 00:05:18.810 "nvmf_set_crdt", 00:05:18.810 "nvmf_set_config", 00:05:18.810 "nvmf_set_max_subsystems", 00:05:18.810 "nvmf_stop_mdns_prr", 00:05:18.810 "nvmf_publish_mdns_prr", 00:05:18.810 "nvmf_subsystem_get_listeners", 00:05:18.810 "nvmf_subsystem_get_qpairs", 00:05:18.810 
"nvmf_subsystem_get_controllers", 00:05:18.810 "nvmf_get_stats", 00:05:18.810 "nvmf_get_transports", 00:05:18.810 "nvmf_create_transport", 00:05:18.810 "nvmf_get_targets", 00:05:18.810 "nvmf_delete_target", 00:05:18.810 "nvmf_create_target", 00:05:18.810 "nvmf_subsystem_allow_any_host", 00:05:18.810 "nvmf_subsystem_set_keys", 00:05:18.810 "nvmf_subsystem_remove_host", 00:05:18.810 "nvmf_subsystem_add_host", 00:05:18.810 "nvmf_ns_remove_host", 00:05:18.810 "nvmf_ns_add_host", 00:05:18.810 "nvmf_subsystem_remove_ns", 00:05:18.810 "nvmf_subsystem_set_ns_ana_group", 00:05:18.810 "nvmf_subsystem_add_ns", 00:05:18.810 "nvmf_subsystem_listener_set_ana_state", 00:05:18.810 "nvmf_discovery_get_referrals", 00:05:18.810 "nvmf_discovery_remove_referral", 00:05:18.810 "nvmf_discovery_add_referral", 00:05:18.810 "nvmf_subsystem_remove_listener", 00:05:18.810 "nvmf_subsystem_add_listener", 00:05:18.810 "nvmf_delete_subsystem", 00:05:18.810 "nvmf_create_subsystem", 00:05:18.810 "nvmf_get_subsystems", 00:05:18.810 "env_dpdk_get_mem_stats", 00:05:18.810 "nbd_get_disks", 00:05:18.810 "nbd_stop_disk", 00:05:18.810 "nbd_start_disk", 00:05:18.810 "ublk_recover_disk", 00:05:18.810 "ublk_get_disks", 00:05:18.810 "ublk_stop_disk", 00:05:18.810 "ublk_start_disk", 00:05:18.810 "ublk_destroy_target", 00:05:18.810 "ublk_create_target", 00:05:18.810 "virtio_blk_create_transport", 00:05:18.810 "virtio_blk_get_transports", 00:05:18.810 "vhost_controller_set_coalescing", 00:05:18.810 "vhost_get_controllers", 00:05:18.810 "vhost_delete_controller", 00:05:18.810 "vhost_create_blk_controller", 00:05:18.810 "vhost_scsi_controller_remove_target", 00:05:18.810 "vhost_scsi_controller_add_target", 00:05:18.810 "vhost_start_scsi_controller", 00:05:18.810 "vhost_create_scsi_controller", 00:05:18.810 "thread_set_cpumask", 00:05:18.810 "scheduler_set_options", 00:05:18.810 "framework_get_governor", 00:05:18.810 "framework_get_scheduler", 00:05:18.810 "framework_set_scheduler", 00:05:18.810 "framework_get_reactors", 00:05:18.810 "thread_get_io_channels", 00:05:18.810 "thread_get_pollers", 00:05:18.810 "thread_get_stats", 00:05:18.810 "framework_monitor_context_switch", 00:05:18.810 "spdk_kill_instance", 00:05:18.810 "log_enable_timestamps", 00:05:18.810 "log_get_flags", 00:05:18.810 "log_clear_flag", 00:05:18.810 "log_set_flag", 00:05:18.810 "log_get_level", 00:05:18.810 "log_set_level", 00:05:18.810 "log_get_print_level", 00:05:18.810 "log_set_print_level", 00:05:18.810 "framework_enable_cpumask_locks", 00:05:18.810 "framework_disable_cpumask_locks", 00:05:18.810 "framework_wait_init", 00:05:18.810 "framework_start_init", 00:05:18.810 "scsi_get_devices", 00:05:18.810 "bdev_get_histogram", 00:05:18.810 "bdev_enable_histogram", 00:05:18.810 "bdev_set_qos_limit", 00:05:18.810 "bdev_set_qd_sampling_period", 00:05:18.810 "bdev_get_bdevs", 00:05:18.810 "bdev_reset_iostat", 00:05:18.810 "bdev_get_iostat", 00:05:18.810 "bdev_examine", 00:05:18.810 "bdev_wait_for_examine", 00:05:18.810 "bdev_set_options", 00:05:18.810 "accel_get_stats", 00:05:18.810 "accel_set_options", 00:05:18.810 "accel_set_driver", 00:05:18.810 "accel_crypto_key_destroy", 00:05:18.810 "accel_crypto_keys_get", 00:05:18.810 "accel_crypto_key_create", 00:05:18.810 "accel_assign_opc", 00:05:18.810 "accel_get_module_info", 00:05:18.810 "accel_get_opc_assignments", 00:05:18.810 "vmd_rescan", 00:05:18.810 "vmd_remove_device", 00:05:18.810 "vmd_enable", 00:05:18.810 "sock_get_default_impl", 00:05:18.810 "sock_set_default_impl", 00:05:18.810 "sock_impl_set_options", 00:05:18.810 
"sock_impl_get_options", 00:05:18.810 "iobuf_get_stats", 00:05:18.810 "iobuf_set_options", 00:05:18.810 "keyring_get_keys", 00:05:18.810 "framework_get_pci_devices", 00:05:18.810 "framework_get_config", 00:05:18.810 "framework_get_subsystems", 00:05:18.810 "fsdev_set_opts", 00:05:18.810 "fsdev_get_opts", 00:05:18.810 "trace_get_info", 00:05:18.810 "trace_get_tpoint_group_mask", 00:05:18.810 "trace_disable_tpoint_group", 00:05:18.810 "trace_enable_tpoint_group", 00:05:18.810 "trace_clear_tpoint_mask", 00:05:18.810 "trace_set_tpoint_mask", 00:05:18.810 "notify_get_notifications", 00:05:18.810 "notify_get_types", 00:05:18.810 "spdk_get_version", 00:05:18.810 "rpc_get_methods" 00:05:18.810 ] 00:05:18.810 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.810 21:32:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57996 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57996 ']' 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57996 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57996 00:05:18.810 killing process with pid 57996 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57996' 00:05:18.810 21:32:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57996 00:05:18.811 21:32:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57996 00:05:19.109 ************************************ 00:05:19.109 END TEST spdkcli_tcp 00:05:19.109 ************************************ 00:05:19.109 00:05:19.109 real 0m1.273s 00:05:19.109 user 0m2.300s 00:05:19.109 sys 0m0.372s 00:05:19.109 21:32:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.109 21:32:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.109 21:32:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.109 21:32:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.109 21:32:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.109 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.109 ************************************ 00:05:19.109 START TEST dpdk_mem_utility 00:05:19.109 ************************************ 00:05:19.109 21:32:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.368 * Looking for test storage... 
00:05:19.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:19.368 21:32:19 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.368 21:32:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.368 21:32:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.368 21:32:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.368 21:32:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.368 21:32:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:19.368 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.368 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.369 --rc genhtml_branch_coverage=1 00:05:19.369 --rc genhtml_function_coverage=1 00:05:19.369 --rc genhtml_legend=1 00:05:19.369 --rc geninfo_all_blocks=1 00:05:19.369 --rc geninfo_unexecuted_blocks=1 00:05:19.369 00:05:19.369 ' 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.369 --rc 
genhtml_branch_coverage=1 00:05:19.369 --rc genhtml_function_coverage=1 00:05:19.369 --rc genhtml_legend=1 00:05:19.369 --rc geninfo_all_blocks=1 00:05:19.369 --rc geninfo_unexecuted_blocks=1 00:05:19.369 00:05:19.369 ' 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.369 --rc genhtml_branch_coverage=1 00:05:19.369 --rc genhtml_function_coverage=1 00:05:19.369 --rc genhtml_legend=1 00:05:19.369 --rc geninfo_all_blocks=1 00:05:19.369 --rc geninfo_unexecuted_blocks=1 00:05:19.369 00:05:19.369 ' 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.369 --rc genhtml_branch_coverage=1 00:05:19.369 --rc genhtml_function_coverage=1 00:05:19.369 --rc genhtml_legend=1 00:05:19.369 --rc geninfo_all_blocks=1 00:05:19.369 --rc geninfo_unexecuted_blocks=1 00:05:19.369 00:05:19.369 ' 00:05:19.369 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:19.369 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58088 00:05:19.369 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.369 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58088 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58088 ']' 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.369 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.369 [2024-12-10 21:32:20.075258] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:19.369 [2024-12-10 21:32:20.075561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58088 ] 00:05:19.628 [2024-12-10 21:32:20.226987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.628 [2024-12-10 21:32:20.267915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.628 [2024-12-10 21:32:20.314071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.888 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.888 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:19.888 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.888 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.888 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.888 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.888 { 00:05:19.888 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.888 } 00:05:19.888 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.888 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:19.888 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:19.888 1 heaps totaling size 818.000000 MiB 00:05:19.888 size: 818.000000 MiB heap id: 0 00:05:19.888 end heaps---------- 00:05:19.888 9 mempools totaling size 603.782043 MiB 00:05:19.888 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.888 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.888 size: 100.555481 MiB name: bdev_io_58088 00:05:19.888 size: 50.003479 MiB name: msgpool_58088 00:05:19.888 size: 36.509338 MiB name: fsdev_io_58088 00:05:19.888 size: 21.763794 MiB name: PDU_Pool 00:05:19.888 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.888 size: 4.133484 MiB name: evtpool_58088 00:05:19.888 size: 0.026123 MiB name: Session_Pool 00:05:19.888 end mempools------- 00:05:19.888 6 memzones totaling size 4.142822 MiB 00:05:19.888 size: 1.000366 MiB name: RG_ring_0_58088 00:05:19.888 size: 1.000366 MiB name: RG_ring_1_58088 00:05:19.888 size: 1.000366 MiB name: RG_ring_4_58088 00:05:19.888 size: 1.000366 MiB name: RG_ring_5_58088 00:05:19.888 size: 0.125366 MiB name: RG_ring_2_58088 00:05:19.888 size: 0.015991 MiB name: RG_ring_3_58088 00:05:19.888 end memzones------- 00:05:19.888 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.888 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:05:19.888 list of free elements. 
size: 10.802490 MiB 00:05:19.888 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:19.888 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:19.888 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:19.888 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:19.888 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:19.888 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:19.888 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:19.888 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:19.888 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:05:19.888 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:19.888 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:19.888 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:19.888 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:19.888 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:19.888 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:19.888 list of standard malloc elements. size: 199.268616 MiB 00:05:19.888 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:19.888 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:19.888 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:19.888 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:19.888 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:19.888 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.888 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:19.888 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.888 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:19.888 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:19.888 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:19.888 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:19.888 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:19.889 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:19.889 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:05:19.889 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:19.889 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:19.890 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:19.890 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:19.890 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:19.890 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:19.890 list of memzone associated elements. 
size: 607.928894 MiB 00:05:19.890 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:19.890 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.890 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:19.890 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.890 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:19.890 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58088_0 00:05:19.890 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:19.890 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58088_0 00:05:19.890 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:19.890 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58088_0 00:05:19.890 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:19.890 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.890 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:19.890 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.890 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:19.890 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58088_0 00:05:19.890 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:19.890 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58088 00:05:19.890 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.890 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58088 00:05:19.890 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:19.890 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.890 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:19.890 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.890 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:19.890 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.890 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:19.891 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.891 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:19.891 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58088 00:05:19.891 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:19.891 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58088 00:05:19.891 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:19.891 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58088 00:05:19.891 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:19.891 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58088 00:05:19.891 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:19.891 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58088 00:05:19.891 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:19.891 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58088 00:05:19.891 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:19.891 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.891 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:19.891 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.891 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:19.891 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.891 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:19.891 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58088 00:05:19.891 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:19.891 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58088 00:05:19.891 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:19.891 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.891 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:19.891 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.891 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:19.891 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58088 00:05:19.891 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:19.891 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.891 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:19.891 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58088 00:05:19.891 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:19.891 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58088 00:05:19.891 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:19.891 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58088 00:05:19.891 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:19.891 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.891 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.891 21:32:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58088 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58088 ']' 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58088 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58088 00:05:19.891 killing process with pid 58088 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58088' 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58088 00:05:19.891 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58088 00:05:20.150 ************************************ 00:05:20.150 END TEST dpdk_mem_utility 00:05:20.150 ************************************ 00:05:20.150 00:05:20.150 real 0m1.071s 00:05:20.150 user 0m1.187s 00:05:20.150 sys 0m0.320s 00:05:20.150 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.150 21:32:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.408 21:32:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.408 21:32:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.408 21:32:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.408 21:32:20 -- common/autotest_common.sh@10 -- # set +x 
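Note: the dpdk_mem_utility run above reduces to a short sequence: start spdk_tgt, wait for its RPC socket, ask it to dump DPDK memory statistics, then post-process the dump with scripts/dpdk_mem_info.py (once for the heap/mempool/memzone summary, once with -m 0 for the per-element detail of heap 0). The stand-alone sketch below mirrors that flow under two assumptions: the same checkout path as this log (/home/vagrant/spdk_repo/spdk) and the default /var/tmp/spdk.sock RPC socket; it polls for the socket directly instead of using the autotest_common.sh waitforlisten helper, so it is not the verbatim test script.

#!/usr/bin/env bash
# Sketch of the dpdk_mem_utility flow traced above (assumed paths, not the verbatim test script).
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: same layout as in this log
RPC_SOCK=/var/tmp/spdk.sock             # default spdk_tgt RPC socket

"$SPDK_DIR/build/bin/spdk_tgt" &        # the log shows this target running as pid 58088
spdkpid=$!
trap 'kill "$spdkpid" 2>/dev/null || true' EXIT

# Poll for the RPC socket (the test itself uses the waitforlisten helper for this).
until [ -S "$RPC_SOCK" ]; do sleep 0.1; done

# Write /tmp/spdk_mem_dump.txt via RPC, then summarize it.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
"$SPDK_DIR/scripts/dpdk_mem_info.py"        # heap/mempool/memzone totals, as printed above
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0   # per-element breakdown of heap 0, as printed above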
00:05:20.408 ************************************ 00:05:20.408 START TEST event 00:05:20.408 ************************************ 00:05:20.408 21:32:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.408 * Looking for test storage... 00:05:20.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.408 21:32:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.408 21:32:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.408 21:32:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.408 21:32:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.408 21:32:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.408 21:32:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.408 21:32:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.408 21:32:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.408 21:32:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.408 21:32:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.408 21:32:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.408 21:32:21 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.408 21:32:21 event -- scripts/common.sh@345 -- # : 1 00:05:20.408 21:32:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.408 21:32:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.408 21:32:21 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.408 21:32:21 event -- scripts/common.sh@353 -- # local d=1 00:05:20.408 21:32:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.408 21:32:21 event -- scripts/common.sh@355 -- # echo 1 00:05:20.408 21:32:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.408 21:32:21 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.408 21:32:21 event -- scripts/common.sh@353 -- # local d=2 00:05:20.408 21:32:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.408 21:32:21 event -- scripts/common.sh@355 -- # echo 2 00:05:20.408 21:32:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.408 21:32:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.408 21:32:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.408 21:32:21 event -- scripts/common.sh@368 -- # return 0 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.408 --rc genhtml_branch_coverage=1 00:05:20.408 --rc genhtml_function_coverage=1 00:05:20.408 --rc genhtml_legend=1 00:05:20.408 --rc geninfo_all_blocks=1 00:05:20.408 --rc geninfo_unexecuted_blocks=1 00:05:20.408 00:05:20.408 ' 00:05:20.408 21:32:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.408 --rc genhtml_branch_coverage=1 00:05:20.408 --rc genhtml_function_coverage=1 00:05:20.409 --rc genhtml_legend=1 00:05:20.409 --rc 
geninfo_all_blocks=1 00:05:20.409 --rc geninfo_unexecuted_blocks=1 00:05:20.409 00:05:20.409 ' 00:05:20.409 21:32:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.409 --rc genhtml_branch_coverage=1 00:05:20.409 --rc genhtml_function_coverage=1 00:05:20.409 --rc genhtml_legend=1 00:05:20.409 --rc geninfo_all_blocks=1 00:05:20.409 --rc geninfo_unexecuted_blocks=1 00:05:20.409 00:05:20.409 ' 00:05:20.409 21:32:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.409 --rc genhtml_branch_coverage=1 00:05:20.409 --rc genhtml_function_coverage=1 00:05:20.409 --rc genhtml_legend=1 00:05:20.409 --rc geninfo_all_blocks=1 00:05:20.409 --rc geninfo_unexecuted_blocks=1 00:05:20.409 00:05:20.409 ' 00:05:20.409 21:32:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:20.409 21:32:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.409 21:32:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.409 21:32:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:20.409 21:32:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.409 21:32:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.409 ************************************ 00:05:20.409 START TEST event_perf 00:05:20.409 ************************************ 00:05:20.409 21:32:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.409 Running I/O for 1 seconds...[2024-12-10 21:32:21.160175] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:20.409 [2024-12-10 21:32:21.160410] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:05:20.667 [2024-12-10 21:32:21.310576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.667 [2024-12-10 21:32:21.359806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.667 [2024-12-10 21:32:21.359956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.667 Running I/O for 1 seconds...[2024-12-10 21:32:21.360047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.667 [2024-12-10 21:32:21.360049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.041 00:05:22.041 lcore 0: 176337 00:05:22.041 lcore 1: 176336 00:05:22.041 lcore 2: 176337 00:05:22.041 lcore 3: 176337 00:05:22.041 done. 
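Note: the four "lcore N" lines above are event_perf's per-reactor event counts for a 1-second run on core mask 0xF (cores 0-3), roughly 176 thousand events per reactor. The trace shows the exact invocation, so re-running it by hand only needs the same binary plus the -m (core mask) and -t flags; the -t 1 run above printed "Running I/O for 1 seconds...", so -t is the run time in seconds. The one-liner below assumes the same checkout path as this log and simply picks a smaller mask and a longer run.

# Re-run the event_perf microbenchmark on cores 0-1 for 5 seconds (path assumed from this log).
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5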
00:05:22.041 00:05:22.041 real 0m1.264s 00:05:22.041 user 0m4.082s 00:05:22.041 sys 0m0.050s 00:05:22.041 ************************************ 00:05:22.041 END TEST event_perf 00:05:22.041 ************************************ 00:05:22.041 21:32:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.041 21:32:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.041 21:32:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.041 21:32:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.041 21:32:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.041 21:32:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.041 ************************************ 00:05:22.041 START TEST event_reactor 00:05:22.041 ************************************ 00:05:22.041 21:32:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.041 [2024-12-10 21:32:22.472219] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:22.041 [2024-12-10 21:32:22.472734] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ] 00:05:22.041 [2024-12-10 21:32:22.615677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.041 [2024-12-10 21:32:22.651613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.977 test_start 00:05:22.977 oneshot 00:05:22.977 tick 100 00:05:22.977 tick 100 00:05:22.977 tick 250 00:05:22.977 tick 100 00:05:22.977 tick 100 00:05:22.977 tick 100 00:05:22.977 tick 250 00:05:22.977 tick 500 00:05:22.977 tick 100 00:05:22.977 tick 100 00:05:22.977 tick 250 00:05:22.977 tick 100 00:05:22.977 tick 100 00:05:22.977 test_end 00:05:22.977 00:05:22.977 real 0m1.240s 00:05:22.977 user 0m1.101s 00:05:22.977 sys 0m0.033s 00:05:22.977 21:32:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.977 21:32:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.977 ************************************ 00:05:22.977 END TEST event_reactor 00:05:22.977 ************************************ 00:05:22.977 21:32:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.977 21:32:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.977 21:32:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.977 21:32:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.977 ************************************ 00:05:22.977 START TEST event_reactor_perf 00:05:22.977 ************************************ 00:05:22.977 21:32:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.236 [2024-12-10 21:32:23.766205] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:23.236 [2024-12-10 21:32:23.766290] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ] 00:05:23.236 [2024-12-10 21:32:23.913238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.236 [2024-12-10 21:32:23.947192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.612 test_start 00:05:24.612 test_end 00:05:24.612 Performance: 353043 events per second 00:05:24.612 00:05:24.612 real 0m1.240s 00:05:24.612 user 0m1.097s 00:05:24.612 sys 0m0.038s 00:05:24.612 21:32:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.612 ************************************ 00:05:24.612 21:32:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 END TEST event_reactor_perf 00:05:24.612 ************************************ 00:05:24.612 21:32:25 event -- event/event.sh@49 -- # uname -s 00:05:24.612 21:32:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.612 21:32:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.612 21:32:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.612 21:32:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.612 21:32:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 ************************************ 00:05:24.612 START TEST event_scheduler 00:05:24.612 ************************************ 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.612 * Looking for test storage... 
00:05:24.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.612 21:32:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.612 --rc genhtml_branch_coverage=1 00:05:24.612 --rc genhtml_function_coverage=1 00:05:24.612 --rc genhtml_legend=1 00:05:24.612 --rc geninfo_all_blocks=1 00:05:24.612 --rc geninfo_unexecuted_blocks=1 00:05:24.612 00:05:24.612 ' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.612 --rc genhtml_branch_coverage=1 00:05:24.612 --rc genhtml_function_coverage=1 00:05:24.612 --rc genhtml_legend=1 00:05:24.612 --rc geninfo_all_blocks=1 00:05:24.612 --rc geninfo_unexecuted_blocks=1 00:05:24.612 00:05:24.612 ' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.612 --rc genhtml_branch_coverage=1 00:05:24.612 --rc genhtml_function_coverage=1 00:05:24.612 --rc genhtml_legend=1 00:05:24.612 --rc geninfo_all_blocks=1 00:05:24.612 --rc geninfo_unexecuted_blocks=1 00:05:24.612 00:05:24.612 ' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.612 --rc genhtml_branch_coverage=1 00:05:24.612 --rc genhtml_function_coverage=1 00:05:24.612 --rc genhtml_legend=1 00:05:24.612 --rc geninfo_all_blocks=1 00:05:24.612 --rc geninfo_unexecuted_blocks=1 00:05:24.612 00:05:24.612 ' 00:05:24.612 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.612 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58298 00:05:24.612 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.612 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.612 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58298 00:05:24.612 21:32:25 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.612 21:32:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.612 [2024-12-10 21:32:25.265365] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:24.612 [2024-12-10 21:32:25.265663] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:05:24.870 [2024-12-10 21:32:25.416874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.870 [2024-12-10 21:32:25.464196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.870 [2024-12-10 21:32:25.464349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.870 [2024-12-10 21:32:25.464464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.870 [2024-12-10 21:32:25.464478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.870 21:32:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.870 21:32:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:24.870 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.870 21:32:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.870 21:32:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.870 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.871 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.871 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.871 POWER: Cannot set governor of lcore 0 to performance 00:05:24.871 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.871 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.871 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.871 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.871 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:24.871 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:24.871 POWER: Unable to set Power Management Environment for lcore 0 00:05:24.871 [2024-12-10 21:32:25.553664] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:24.871 [2024-12-10 21:32:25.553681] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:24.871 [2024-12-10 21:32:25.553692] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.871 [2024-12-10 21:32:25.553713] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.871 [2024-12-10 21:32:25.553722] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.871 [2024-12-10 21:32:25.553731] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.871 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.871 [2024-12-10 21:32:25.594629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.871 [2024-12-10 21:32:25.617641] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.871 21:32:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.871 21:32:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.871 ************************************ 00:05:24.871 START TEST scheduler_create_thread 00:05:24.871 ************************************ 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.871 2 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.871 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 3 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 4 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 5 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 6 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 7 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 8 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 9 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 10 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.130 21:32:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.065 21:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.065 21:32:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.066 21:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.066 21:32:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.442 21:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.442 21:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.442 21:32:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.442 21:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.442 21:32:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.378 ************************************ 00:05:28.378 END TEST scheduler_create_thread 00:05:28.378 ************************************ 00:05:28.378 21:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.378 00:05:28.378 real 0m3.376s 00:05:28.378 user 0m0.020s 00:05:28.378 sys 0m0.005s 00:05:28.378 21:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.378 21:32:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.378 21:32:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.378 21:32:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58298 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58298 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58298 00:05:28.378 killing process with pid 58298 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58298' 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58298 00:05:28.378 21:32:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58298 00:05:28.637 [2024-12-10 21:32:29.383654] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.896 00:05:28.896 real 0m4.529s 00:05:28.896 user 0m7.895s 00:05:28.896 sys 0m0.328s 00:05:28.896 21:32:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.896 ************************************ 00:05:28.896 END TEST event_scheduler 00:05:28.896 ************************************ 00:05:28.896 21:32:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.896 21:32:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.896 21:32:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.896 21:32:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.896 21:32:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.896 21:32:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.896 ************************************ 00:05:28.896 START TEST app_repeat 00:05:28.896 ************************************ 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.896 Process app_repeat pid: 58390 00:05:28.896 spdk_app_start Round 0 00:05:28.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58390 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58390' 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.896 21:32:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58390 /var/tmp/spdk-nbd.sock 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58390 ']' 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
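For reference, the scheduler setup traced above is ordinary rpc.py traffic against the test app's UNIX socket, and the POWER errors are expected on this guest: there are no cpufreq sysfs nodes and no virtio power-agent channel, so the DPDK governor cannot initialize and only the dynamic scheduler's own limits (load 20, core 80, busy 95) take effect. A minimal by-hand equivalent, with the socket path as in the log and the thread parameters purely illustrative:

  SOCK=/var/tmp/spdk.sock
  # switch the running app from the default static scheduler to the dynamic one
  scripts/rpc.py -s "$SOCK" framework_set_scheduler dynamic
  scripts/rpc.py -s "$SOCK" framework_start_init
  # the scheduler_thread_* calls come from the test's RPC plugin, not core rpc.py
  # (the plugin module has to be importable, e.g. via PYTHONPATH)
  scripts/rpc.py -s "$SOCK" --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # inspect how the threads were distributed across reactors
  scripts/rpc.py -s "$SOCK" framework_get_reactors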
00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.896 21:32:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.896 [2024-12-10 21:32:29.657324] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:28.896 [2024-12-10 21:32:29.657430] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58390 ] 00:05:29.154 [2024-12-10 21:32:29.801065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.154 [2024-12-10 21:32:29.836253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.154 [2024-12-10 21:32:29.836274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.154 [2024-12-10 21:32:29.866766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.154 21:32:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.154 21:32:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.154 21:32:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.412 Malloc0 00:05:29.670 21:32:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.928 Malloc1 00:05:29.928 21:32:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.928 21:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.186 /dev/nbd0 00:05:30.186 21:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.186 21:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.187 1+0 records in 00:05:30.187 1+0 records out 00:05:30.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284969 s, 14.4 MB/s 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.187 21:32:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.187 21:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.187 21:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.187 21:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.467 /dev/nbd1 00:05:30.467 21:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.467 21:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.467 21:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.467 1+0 records in 00:05:30.467 1+0 records out 00:05:30.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041595 s, 9.8 MB/s 00:05:30.726 21:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.726 21:32:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.726 21:32:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.726 21:32:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.726 21:32:31 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.726 { 00:05:30.726 "nbd_device": "/dev/nbd0", 00:05:30.726 "bdev_name": "Malloc0" 00:05:30.726 }, 00:05:30.726 { 00:05:30.726 "nbd_device": "/dev/nbd1", 00:05:30.726 "bdev_name": "Malloc1" 00:05:30.726 } 00:05:30.726 ]' 00:05:30.726 21:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.726 { 00:05:30.726 "nbd_device": "/dev/nbd0", 00:05:30.726 "bdev_name": "Malloc0" 00:05:30.726 }, 00:05:30.726 { 00:05:30.726 "nbd_device": "/dev/nbd1", 00:05:30.727 "bdev_name": "Malloc1" 00:05:30.727 } 00:05:30.727 ]' 00:05:30.727 21:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.984 /dev/nbd1' 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.984 /dev/nbd1' 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.984 21:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.985 256+0 records in 00:05:30.985 256+0 records out 00:05:30.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103021 s, 102 MB/s 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.985 256+0 records in 00:05:30.985 256+0 records out 00:05:30.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027072 s, 38.7 MB/s 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.985 256+0 records in 00:05:30.985 256+0 
records out 00:05:30.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296682 s, 35.3 MB/s 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.985 21:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.243 21:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
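The data check that just passed for Round 0 is a plain write/read-back pattern: nbd_dd_data_verify writes a 1 MiB random file through each exported NBD device with direct I/O, then compares the devices against the file byte-for-byte before deleting it. Stripped of the tracing, the flow is:

  TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write it out, bypassing the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$TMP" "$nbd"                              # byte-wise compare of the first 1 MiB
  done
  rm "$TMP"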
00:05:31.501 21:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.760 21:32:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.760 21:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.760 21:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.760 21:32:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.760 21:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.018 21:32:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.018 21:32:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.277 21:32:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.277 [2024-12-10 21:32:33.035255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.536 [2024-12-10 21:32:33.071678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.536 [2024-12-10 21:32:33.071690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.536 [2024-12-10 21:32:33.101867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.536 [2024-12-10 21:32:33.101935] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.536 [2024-12-10 21:32:33.101949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.828 spdk_app_start Round 1 00:05:35.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.828 21:32:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.828 21:32:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.828 21:32:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58390 /var/tmp/spdk-nbd.sock 00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58390 ']' 00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
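Round 1 now repeats the per-round device setup that Round 0 just tore down: two 64 MB malloc bdevs are created over the app's RPC socket, exported as NBD block devices, verified, and stopped again. Condensed to the RPC calls visible in the trace (socket path and sizes as in the log):

  rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096                    # 64 MB bdev, 4096-byte blocks -> Malloc0
  rpc bdev_malloc_create 64 4096                    # -> Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0              # export each bdev as an NBD device node
  rpc nbd_start_disk Malloc1 /dev/nbd1
  rpc nbd_get_disks | jq -r '.[] | .nbd_device'     # should print both device nodes
  rpc nbd_stop_disk /dev/nbd0                       # per-round teardown
  rpc nbd_stop_disk /dev/nbd1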
00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.828 21:32:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.828 21:32:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.828 21:32:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.828 21:32:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.828 Malloc0 00:05:35.828 21:32:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.086 Malloc1 00:05:36.086 21:32:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.086 21:32:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.652 /dev/nbd0 00:05:36.652 21:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.652 21:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.652 1+0 records in 00:05:36.652 1+0 records out 
00:05:36.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334806 s, 12.2 MB/s 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.652 21:32:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.652 21:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.652 21:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.652 21:32:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.910 /dev/nbd1 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.910 1+0 records in 00:05:36.910 1+0 records out 00:05:36.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279168 s, 14.7 MB/s 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.910 21:32:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.910 21:32:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.169 21:32:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.169 { 00:05:37.169 "nbd_device": "/dev/nbd0", 00:05:37.169 "bdev_name": "Malloc0" 00:05:37.169 }, 00:05:37.169 { 00:05:37.169 "nbd_device": "/dev/nbd1", 00:05:37.169 "bdev_name": "Malloc1" 00:05:37.169 } 
00:05:37.169 ]' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.170 { 00:05:37.170 "nbd_device": "/dev/nbd0", 00:05:37.170 "bdev_name": "Malloc0" 00:05:37.170 }, 00:05:37.170 { 00:05:37.170 "nbd_device": "/dev/nbd1", 00:05:37.170 "bdev_name": "Malloc1" 00:05:37.170 } 00:05:37.170 ]' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.170 /dev/nbd1' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.170 /dev/nbd1' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.170 256+0 records in 00:05:37.170 256+0 records out 00:05:37.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701296 s, 150 MB/s 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.170 256+0 records in 00:05:37.170 256+0 records out 00:05:37.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238822 s, 43.9 MB/s 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.170 21:32:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.428 256+0 records in 00:05:37.428 256+0 records out 00:05:37.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275561 s, 38.1 MB/s 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.428 21:32:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.428 21:32:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.429 21:32:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.429 21:32:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.429 21:32:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.429 21:32:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.687 21:32:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.945 21:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.206 21:32:38 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.206 21:32:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.206 21:32:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.775 21:32:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.775 [2024-12-10 21:32:39.350242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.775 [2024-12-10 21:32:39.383256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.775 [2024-12-10 21:32:39.383267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.775 [2024-12-10 21:32:39.413945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.775 [2024-12-10 21:32:39.414040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.775 [2024-12-10 21:32:39.414054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.062 spdk_app_start Round 2 00:05:42.062 21:32:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.062 21:32:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:42.062 21:32:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58390 /var/tmp/spdk-nbd.sock 00:05:42.062 21:32:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58390 ']' 00:05:42.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
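Worth noting for the round transitions: app_repeat (pid 58390) was started once with -t 4, so the spdk_kill_instance SIGTERM sent here ends only the current app iteration; the process tears down, calls spdk_app_start again, and the same PID answers on /var/tmp/spdk-nbd.sock for the next round, which is why each round re-prints the EAL/reactor banner and the harmless "already registered" notify messages. The shell side of the loop reduces to roughly:

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # wait for the RPC socket to come back
      # ... bdev_malloc_create, nbd_start_disk, dd/cmp verification, nbd_stop_disk ...
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                                  # let the app recycle before the next round
  done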
00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.063 21:32:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.063 21:32:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.063 Malloc0 00:05:42.063 21:32:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.659 Malloc1 00:05:42.659 21:32:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.659 /dev/nbd0 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.659 21:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.659 21:32:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.659 21:32:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.659 21:32:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.918 1+0 records in 00:05:42.918 1+0 records out 
00:05:42.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553571 s, 7.4 MB/s 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.918 21:32:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.918 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.918 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.918 21:32:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.175 /dev/nbd1 00:05:43.175 21:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.175 21:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.175 1+0 records in 00:05:43.175 1+0 records out 00:05:43.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214568 s, 19.1 MB/s 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.175 21:32:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.176 21:32:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.176 21:32:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.176 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.176 21:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.176 21:32:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.176 21:32:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.176 21:32:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.434 21:32:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.434 { 00:05:43.434 "nbd_device": "/dev/nbd0", 00:05:43.434 "bdev_name": "Malloc0" 00:05:43.434 }, 00:05:43.434 { 00:05:43.434 "nbd_device": "/dev/nbd1", 00:05:43.434 "bdev_name": "Malloc1" 00:05:43.434 } 
00:05:43.434 ]' 00:05:43.434 21:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.434 { 00:05:43.434 "nbd_device": "/dev/nbd0", 00:05:43.434 "bdev_name": "Malloc0" 00:05:43.434 }, 00:05:43.434 { 00:05:43.434 "nbd_device": "/dev/nbd1", 00:05:43.434 "bdev_name": "Malloc1" 00:05:43.434 } 00:05:43.434 ]' 00:05:43.434 21:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.434 21:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.434 /dev/nbd1' 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.692 /dev/nbd1' 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.692 256+0 records in 00:05:43.692 256+0 records out 00:05:43.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492694 s, 213 MB/s 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.692 256+0 records in 00:05:43.692 256+0 records out 00:05:43.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241643 s, 43.4 MB/s 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.692 256+0 records in 00:05:43.692 256+0 records out 00:05:43.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314552 s, 33.3 MB/s 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.692 21:32:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.692 21:32:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.693 21:32:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.950 21:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.950 21:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.950 21:32:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.951 21:32:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.209 21:32:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.469 21:32:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.727 21:32:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.727 21:32:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.986 21:32:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.246 [2024-12-10 21:32:45.791722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.246 [2024-12-10 21:32:45.825050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.246 [2024-12-10 21:32:45.825063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.246 [2024-12-10 21:32:45.855140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.246 [2024-12-10 21:32:45.855239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.246 [2024-12-10 21:32:45.855253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.553 21:32:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58390 /var/tmp/spdk-nbd.sock 00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58390 ']' 00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
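The nbd_common.sh entries above boil down to a write-then-verify loop followed by a count check once the disks are stopped. A minimal sketch of that flow, reconstructed from the commands visible in the trace (paths, block sizes, and helper names are the ones shown there, not copied from the SPDK source):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                  # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct       # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                                  # verify phase
    done
    rm "$tmp_file"
    # after nbd_stop_disk, nbd_get_disks returns '[]', so the /dev/nbd count drops to 0
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)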
00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.553 21:32:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.553 21:32:49 event.app_repeat -- event/event.sh@39 -- # killprocess 58390 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58390 ']' 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58390 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58390 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.553 killing process with pid 58390 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58390' 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58390 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58390 00:05:48.553 spdk_app_start is called in Round 0. 00:05:48.553 Shutdown signal received, stop current app iteration 00:05:48.553 Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization... 00:05:48.553 spdk_app_start is called in Round 1. 00:05:48.553 Shutdown signal received, stop current app iteration 00:05:48.553 Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization... 00:05:48.553 spdk_app_start is called in Round 2. 00:05:48.553 Shutdown signal received, stop current app iteration 00:05:48.553 Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization... 00:05:48.553 spdk_app_start is called in Round 3. 00:05:48.553 Shutdown signal received, stop current app iteration 00:05:48.553 21:32:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.553 21:32:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.553 00:05:48.553 real 0m19.575s 00:05:48.553 user 0m45.408s 00:05:48.553 sys 0m2.773s 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.553 21:32:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ************************************ 00:05:48.553 END TEST app_repeat 00:05:48.553 ************************************ 00:05:48.553 21:32:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.553 21:32:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.553 21:32:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.553 21:32:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.553 21:32:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ************************************ 00:05:48.553 START TEST cpu_locks 00:05:48.553 ************************************ 00:05:48.553 21:32:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.553 * Looking for test storage... 
00:05:48.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.553 21:32:49 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.553 21:32:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.553 21:32:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.812 21:32:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.812 --rc genhtml_branch_coverage=1 00:05:48.812 --rc genhtml_function_coverage=1 00:05:48.812 --rc genhtml_legend=1 00:05:48.812 --rc geninfo_all_blocks=1 00:05:48.812 --rc geninfo_unexecuted_blocks=1 00:05:48.812 00:05:48.812 ' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.812 --rc genhtml_branch_coverage=1 00:05:48.812 --rc genhtml_function_coverage=1 
00:05:48.812 --rc genhtml_legend=1 00:05:48.812 --rc geninfo_all_blocks=1 00:05:48.812 --rc geninfo_unexecuted_blocks=1 00:05:48.812 00:05:48.812 ' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.812 --rc genhtml_branch_coverage=1 00:05:48.812 --rc genhtml_function_coverage=1 00:05:48.812 --rc genhtml_legend=1 00:05:48.812 --rc geninfo_all_blocks=1 00:05:48.812 --rc geninfo_unexecuted_blocks=1 00:05:48.812 00:05:48.812 ' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.812 --rc genhtml_branch_coverage=1 00:05:48.812 --rc genhtml_function_coverage=1 00:05:48.812 --rc genhtml_legend=1 00:05:48.812 --rc geninfo_all_blocks=1 00:05:48.812 --rc geninfo_unexecuted_blocks=1 00:05:48.812 00:05:48.812 ' 00:05:48.812 21:32:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.812 21:32:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.812 21:32:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.812 21:32:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.812 21:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.812 ************************************ 00:05:48.812 START TEST default_locks 00:05:48.812 ************************************ 00:05:48.812 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:48.812 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58839 00:05:48.812 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58839 00:05:48.812 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58839 ']' 00:05:48.812 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.813 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.813 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.813 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.813 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.813 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.813 [2024-12-10 21:32:49.508122] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:48.813 [2024-12-10 21:32:49.508221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:05:49.071 [2024-12-10 21:32:49.654000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.071 [2024-12-10 21:32:49.695247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.071 [2024-12-10 21:32:49.742898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.329 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.329 21:32:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:49.329 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58839 00:05:49.329 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.329 21:32:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58839 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58839 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58839 ']' 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58839 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.587 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58839 00:05:49.845 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.845 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.845 killing process with pid 58839 00:05:49.845 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58839' 00:05:49.845 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58839 00:05:49.845 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58839 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58839 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58839 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58839 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58839 ']' 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.104 
21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58839) - No such process 00:05:50.104 ERROR: process (pid: 58839) is no longer running 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.104 21:32:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.104 00:05:50.104 real 0m1.213s 00:05:50.104 user 0m1.322s 00:05:50.104 sys 0m0.484s 00:05:50.105 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.105 21:32:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.105 ************************************ 00:05:50.105 END TEST default_locks 00:05:50.105 ************************************ 00:05:50.105 21:32:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.105 21:32:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.105 21:32:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.105 21:32:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.105 ************************************ 00:05:50.105 START TEST default_locks_via_rpc 00:05:50.105 ************************************ 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58879 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58879 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58879 ']' 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
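The default_locks trace above checks that spdk_tgt started with -m 0x1 holds a spdk_cpu_lock entry for its core and that the lock disappears with the process. A rough sketch of that sequence, with helper names taken from the trace rather than from the test source:

    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    locks_exist "$spdk_tgt_pid"            # lock is visible while the target runs
    killprocess "$spdk_tgt_pid"
    NOT waitforlisten "$spdk_tgt_pid"      # must fail: the process (and its lock) is gone
    no_locks                               # no lock files left behind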
00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.105 21:32:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.105 [2024-12-10 21:32:50.777760] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:50.105 [2024-12-10 21:32:50.777882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:05:50.363 [2024-12-10 21:32:50.930750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.363 [2024-12-10 21:32:50.970153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.363 [2024-12-10 21:32:51.016894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58879 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58879 00:05:50.621 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.879 21:32:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58879 00:05:50.879 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58879 ']' 00:05:50.879 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58879 00:05:50.879 21:32:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.879 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58879 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.880 killing process with pid 58879 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58879' 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58879 00:05:50.880 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58879 00:05:51.137 00:05:51.138 real 0m1.203s 00:05:51.138 user 0m1.296s 00:05:51.138 sys 0m0.424s 00:05:51.138 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.138 21:32:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.138 ************************************ 00:05:51.138 END TEST default_locks_via_rpc 00:05:51.138 ************************************ 00:05:51.397 21:32:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.397 21:32:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.397 21:32:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.397 21:32:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.397 ************************************ 00:05:51.397 START TEST non_locking_app_on_locked_coremask 00:05:51.397 ************************************ 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58922 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58922 /var/tmp/spdk.sock 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58922 ']' 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
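The default_locks_via_rpc entries show the same lock being dropped and re-taken over the RPC socket instead of at process exit. A hedged sketch of that flow (rpc_cmd, no_locks, and locks_exist are the helper names visible in the trace; the /var/tmp/spdk_cpu_lock_* glob is inferred from check_remaining_locks later in the log):

    rpc_cmd framework_disable_cpumask_locks   # running target releases its core lock
    no_locks                                   # assumed to assert no /var/tmp/spdk_cpu_lock_* files remain
    rpc_cmd framework_enable_cpumask_locks    # lock is re-acquired
    locks_exist "$spdk_tgt_pid"                # lslocks -p $pid | grep -q spdk_cpu_lock succeeds again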
00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.397 21:32:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.397 [2024-12-10 21:32:52.028475] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:51.397 [2024-12-10 21:32:52.028585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ] 00:05:51.397 [2024-12-10 21:32:52.177575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.655 [2024-12-10 21:32:52.218520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.655 [2024-12-10 21:32:52.264004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58925 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58925 /var/tmp/spdk2.sock 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58925 ']' 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.655 21:32:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.969 [2024-12-10 21:32:52.472853] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:51.969 [2024-12-10 21:32:52.472958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:51.969 [2024-12-10 21:32:52.643859] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.969 [2024-12-10 21:32:52.643927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.969 [2024-12-10 21:32:52.712150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.227 [2024-12-10 21:32:52.797236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.794 21:32:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.794 21:32:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.794 21:32:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58922 00:05:52.794 21:32:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58922 00:05:52.794 21:32:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58922 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58922 ']' 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58922 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58922 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.730 killing process with pid 58922 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58922' 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58922 00:05:53.730 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58922 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58925 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58925 ']' 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58925 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58925 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.297 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.297 killing process with pid 58925 00:05:54.298 21:32:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58925' 00:05:54.298 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58925 00:05:54.298 21:32:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58925 00:05:54.559 00:05:54.559 real 0m3.166s 00:05:54.559 user 0m3.772s 00:05:54.559 sys 0m0.919s 00:05:54.559 21:32:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.559 21:32:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.559 ************************************ 00:05:54.559 END TEST non_locking_app_on_locked_coremask 00:05:54.559 ************************************ 00:05:54.559 21:32:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.559 21:32:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.559 21:32:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.559 21:32:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.559 ************************************ 00:05:54.559 START TEST locking_app_on_unlocked_coremask 00:05:54.559 ************************************ 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58992 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58992 /var/tmp/spdk.sock 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58992 ']' 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.559 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.559 [2024-12-10 21:32:55.232799] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:05:54.559 [2024-12-10 21:32:55.232898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:05:54.817 [2024-12-10 21:32:55.381991] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
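The non_locking_app_on_locked_coremask trace starts two targets on the same core, the second with --disable-cpumask-locks so it never competes for the lock. Sketched from the command lines in the trace (spdk_tgt stands in for the full build/bin path shown there):

    spdk_tgt -m 0x1 & first_pid=$!                                        # first instance claims core 0's lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & second_pid=$!   # second instance opts out and shares core 0
    locks_exist "$first_pid"                                              # only the first instance shows up in lslocks
    killprocess "$first_pid"
    killprocess "$second_pid"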
00:05:54.817 [2024-12-10 21:32:55.382058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.817 [2024-12-10 21:32:55.422437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.817 [2024-12-10 21:32:55.468893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59001 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59001 /var/tmp/spdk2.sock 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59001 ']' 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.076 21:32:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.076 [2024-12-10 21:32:55.672988] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:55.076 [2024-12-10 21:32:55.673595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:05:55.076 [2024-12-10 21:32:55.837106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.334 [2024-12-10 21:32:55.907596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.334 [2024-12-10 21:32:55.991368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.593 21:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.593 21:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.593 21:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59001 00:05:55.593 21:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59001 00:05:55.593 21:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58992 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58992 ']' 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58992 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58992 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.528 killing process with pid 58992 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58992' 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58992 00:05:56.528 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58992 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59001 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59001 ']' 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59001 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59001 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.095 killing process with pid 59001 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59001' 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59001 00:05:57.095 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59001 00:05:57.353 00:05:57.353 real 0m2.764s 00:05:57.353 user 0m3.140s 00:05:57.353 sys 0m0.890s 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.353 ************************************ 00:05:57.353 END TEST locking_app_on_unlocked_coremask 00:05:57.353 ************************************ 00:05:57.353 21:32:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.353 21:32:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.353 21:32:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.353 21:32:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.353 ************************************ 00:05:57.353 START TEST locking_app_on_locked_coremask 00:05:57.353 ************************************ 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59055 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59055 /var/tmp/spdk.sock 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59055 ']' 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.353 21:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.353 [2024-12-10 21:32:58.047616] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:57.354 [2024-12-10 21:32:58.047718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:05:57.612 [2024-12-10 21:32:58.194182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.612 [2024-12-10 21:32:58.230078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.612 [2024-12-10 21:32:58.271119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59063 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59063 /var/tmp/spdk2.sock 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59063 /var/tmp/spdk2.sock 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59063 /var/tmp/spdk2.sock 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59063 ']' 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.871 21:32:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.871 [2024-12-10 21:32:58.475431] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:57.871 [2024-12-10 21:32:58.475546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ] 00:05:57.871 [2024-12-10 21:32:58.638623] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59055 has claimed it. 00:05:57.871 [2024-12-10 21:32:58.638693] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.806 ERROR: process (pid: 59063) is no longer running 00:05:58.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59063) - No such process 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59055 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59055 00:05:58.806 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59055 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59055 ']' 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59055 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59055 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.065 killing process with pid 59055 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59055' 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59055 00:05:59.065 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59055 00:05:59.326 00:05:59.326 real 0m1.989s 00:05:59.326 user 0m2.406s 00:05:59.326 sys 0m0.527s 00:05:59.326 21:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.326 21:32:59 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:59.326 ************************************ 00:05:59.326 END TEST locking_app_on_locked_coremask 00:05:59.326 ************************************ 00:05:59.326 21:33:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.326 21:33:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.326 21:33:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.326 21:33:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.326 ************************************ 00:05:59.326 START TEST locking_overlapped_coremask 00:05:59.326 ************************************ 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59109 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59109 /var/tmp/spdk.sock 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59109 ']' 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.326 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.326 [2024-12-10 21:33:00.093015] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:59.326 [2024-12-10 21:33:00.093111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:05:59.594 [2024-12-10 21:33:00.244597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.594 [2024-12-10 21:33:00.287370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.594 [2024-12-10 21:33:00.287460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.594 [2024-12-10 21:33:00.287463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.594 [2024-12-10 21:33:00.332985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59119 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59119 /var/tmp/spdk2.sock 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59119 /var/tmp/spdk2.sock 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59119 /var/tmp/spdk2.sock 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59119 ']' 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.853 21:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.853 [2024-12-10 21:33:00.547551] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:05:59.853 [2024-12-10 21:33:00.547643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:06:00.111 [2024-12-10 21:33:00.717175] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59109 has claimed it. 00:06:00.111 [2024-12-10 21:33:00.717233] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.679 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59119) - No such process 00:06:00.679 ERROR: process (pid: 59119) is no longer running 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59109 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59109 ']' 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59109 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59109 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59109' 00:06:00.679 killing process with pid 59109 00:06:00.679 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59109 00:06:00.679 21:33:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59109 00:06:00.938 00:06:00.938 real 0m1.574s 00:06:00.938 user 0m4.394s 00:06:00.938 sys 0m0.315s 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.938 ************************************ 00:06:00.938 END TEST locking_overlapped_coremask 00:06:00.938 ************************************ 00:06:00.938 21:33:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.938 21:33:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.938 21:33:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.938 21:33:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.938 ************************************ 00:06:00.938 START TEST locking_overlapped_coremask_via_rpc 00:06:00.938 ************************************ 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59159 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59159 /var/tmp/spdk.sock 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59159 ']' 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.938 21:33:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.938 [2024-12-10 21:33:01.706375] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:00.938 [2024-12-10 21:33:01.706896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:06:01.197 [2024-12-10 21:33:01.851891] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
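The overlapped-coremask tests in this run finish by calling check_remaining_locks, which reduces to comparing the lock files present under /var/tmp against the set implied by the 0x7 core mask (cores 0-2). A minimal standalone sketch of that comparison, reusing the paths from the trace but with a made-up helper name (check_core_locks), might look like:

  # Sketch only: succeed iff exactly the lock files for cores 0-2 exist.
  check_core_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ "${locks[*]}" == "${expected[*]}" ]]
  }
  check_core_locks && echo 'lock files match core mask 0x7'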
00:06:01.197 [2024-12-10 21:33:01.851956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.197 [2024-12-10 21:33:01.887709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.197 [2024-12-10 21:33:01.887854] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.197 [2024-12-10 21:33:01.887858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.197 [2024-12-10 21:33:01.935113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59170 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59170 /var/tmp/spdk2.sock 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59170 ']' 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.455 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.455 [2024-12-10 21:33:02.138019] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:01.455 [2024-12-10 21:33:02.138113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59170 ] 00:06:01.714 [2024-12-10 21:33:02.311082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
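For reference, the -m 0x1c mask passed to this second target selects cores 2 through 4, which is consistent with the reactor start-up notices that follow. A throwaway sketch of expanding a hex coremask into core IDs (plain bash arithmetic, not an SPDK helper):

  # Print the core IDs selected by a coremask; 0x1c prints "2 3 4".
  mask=0x1c
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && printf '%d ' "$core"
  done
  echo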
00:06:01.714 [2024-12-10 21:33:02.311140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.714 [2024-12-10 21:33:02.400865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.714 [2024-12-10 21:33:02.404578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.714 [2024-12-10 21:33:02.404580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.714 [2024-12-10 21:33:02.485065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.972 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.973 [2024-12-10 21:33:02.734583] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59159 has claimed it. 
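The JSON-RPC failure recorded next is the expected result: the lock-less second target asks to claim its cores while the first target (pid 59159) still holds the lock on core 2. Assuming the repo-relative rpc.py used elsewhere in this run, the same error can presumably be reproduced by hand against the second target's socket:

  # Ask the target on /var/tmp/spdk2.sock to claim its core mask while core 2
  # is still locked by another process; expect "Failed to claim CPU core: 2".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks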
00:06:01.973 request: 00:06:01.973 { 00:06:01.973 "method": "framework_enable_cpumask_locks", 00:06:01.973 "req_id": 1 00:06:01.973 } 00:06:01.973 Got JSON-RPC error response 00:06:01.973 response: 00:06:01.973 { 00:06:01.973 "code": -32603, 00:06:01.973 "message": "Failed to claim CPU core: 2" 00:06:01.973 } 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59159 /var/tmp/spdk.sock 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59159 ']' 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.973 21:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59170 /var/tmp/spdk2.sock 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59170 ']' 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.539 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.798 00:06:02.798 real 0m1.765s 00:06:02.798 user 0m1.194s 00:06:02.798 sys 0m0.139s 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.798 21:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.798 ************************************ 00:06:02.798 END TEST locking_overlapped_coremask_via_rpc 00:06:02.798 ************************************ 00:06:02.798 21:33:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.798 21:33:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59159 ]] 00:06:02.798 21:33:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59159 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59159 ']' 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59159 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59159 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.798 killing process with pid 59159 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59159' 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59159 00:06:02.798 21:33:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59159 00:06:03.056 21:33:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59170 ]] 00:06:03.056 21:33:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59170 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59170 ']' 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59170 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.056 
21:33:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59170 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:03.056 killing process with pid 59170 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59170' 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59170 00:06:03.056 21:33:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59170 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59159 ]] 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59159 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59159 ']' 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59159 00:06:03.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59159) - No such process 00:06:03.315 Process with pid 59159 is not found 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59159 is not found' 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59170 ]] 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59170 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59170 ']' 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59170 00:06:03.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59170) - No such process 00:06:03.315 Process with pid 59170 is not found 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59170 is not found' 00:06:03.315 21:33:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.315 00:06:03.315 real 0m14.806s 00:06:03.315 user 0m26.516s 00:06:03.315 sys 0m4.360s 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.315 ************************************ 00:06:03.315 END TEST cpu_locks 00:06:03.315 21:33:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.315 ************************************ 00:06:03.315 00:06:03.315 real 0m43.133s 00:06:03.315 user 1m26.297s 00:06:03.315 sys 0m7.838s 00:06:03.315 21:33:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.315 21:33:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.315 ************************************ 00:06:03.315 END TEST event 00:06:03.315 ************************************ 00:06:03.574 21:33:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.574 21:33:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.574 21:33:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.574 21:33:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.574 ************************************ 00:06:03.574 START TEST thread 00:06:03.574 ************************************ 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.574 * Looking for test storage... 
00:06:03.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.574 21:33:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.574 21:33:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.574 21:33:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.574 21:33:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.574 21:33:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.574 21:33:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.574 21:33:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.574 21:33:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.574 21:33:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.574 21:33:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.574 21:33:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.574 21:33:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:03.574 21:33:04 thread -- scripts/common.sh@345 -- # : 1 00:06:03.574 21:33:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.574 21:33:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.574 21:33:04 thread -- scripts/common.sh@365 -- # decimal 1 00:06:03.574 21:33:04 thread -- scripts/common.sh@353 -- # local d=1 00:06:03.574 21:33:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.574 21:33:04 thread -- scripts/common.sh@355 -- # echo 1 00:06:03.574 21:33:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.574 21:33:04 thread -- scripts/common.sh@366 -- # decimal 2 00:06:03.574 21:33:04 thread -- scripts/common.sh@353 -- # local d=2 00:06:03.574 21:33:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.574 21:33:04 thread -- scripts/common.sh@355 -- # echo 2 00:06:03.574 21:33:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.574 21:33:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.574 21:33:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.574 21:33:04 thread -- scripts/common.sh@368 -- # return 0 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.574 21:33:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.574 --rc genhtml_branch_coverage=1 00:06:03.574 --rc genhtml_function_coverage=1 00:06:03.574 --rc genhtml_legend=1 00:06:03.574 --rc geninfo_all_blocks=1 00:06:03.575 --rc geninfo_unexecuted_blocks=1 00:06:03.575 00:06:03.575 ' 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.575 --rc genhtml_branch_coverage=1 00:06:03.575 --rc genhtml_function_coverage=1 00:06:03.575 --rc genhtml_legend=1 00:06:03.575 --rc geninfo_all_blocks=1 00:06:03.575 --rc geninfo_unexecuted_blocks=1 00:06:03.575 00:06:03.575 ' 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:03.575 --rc genhtml_branch_coverage=1 00:06:03.575 --rc genhtml_function_coverage=1 00:06:03.575 --rc genhtml_legend=1 00:06:03.575 --rc geninfo_all_blocks=1 00:06:03.575 --rc geninfo_unexecuted_blocks=1 00:06:03.575 00:06:03.575 ' 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.575 --rc genhtml_branch_coverage=1 00:06:03.575 --rc genhtml_function_coverage=1 00:06:03.575 --rc genhtml_legend=1 00:06:03.575 --rc geninfo_all_blocks=1 00:06:03.575 --rc geninfo_unexecuted_blocks=1 00:06:03.575 00:06:03.575 ' 00:06:03.575 21:33:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.575 21:33:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.575 ************************************ 00:06:03.575 START TEST thread_poller_perf 00:06:03.575 ************************************ 00:06:03.575 21:33:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.575 [2024-12-10 21:33:04.327711] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:03.575 [2024-12-10 21:33:04.327847] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:06:03.834 [2024-12-10 21:33:04.477640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.834 [2024-12-10 21:33:04.527588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.834 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:05.229 [2024-12-10T21:33:06.012Z] ====================================== 00:06:05.229 [2024-12-10T21:33:06.012Z] busy:2213741873 (cyc) 00:06:05.229 [2024-12-10T21:33:06.012Z] total_run_count: 288000 00:06:05.229 [2024-12-10T21:33:06.012Z] tsc_hz: 2200000000 (cyc) 00:06:05.229 [2024-12-10T21:33:06.012Z] ====================================== 00:06:05.229 [2024-12-10T21:33:06.012Z] poller_cost: 7686 (cyc), 3493 (nsec) 00:06:05.229 00:06:05.229 real 0m1.273s 00:06:05.229 user 0m1.122s 00:06:05.229 sys 0m0.041s 00:06:05.229 21:33:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.229 21:33:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.229 ************************************ 00:06:05.229 END TEST thread_poller_perf 00:06:05.229 ************************************ 00:06:05.229 21:33:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.229 21:33:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:05.229 21:33:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.229 21:33:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.229 ************************************ 00:06:05.229 START TEST thread_poller_perf 00:06:05.229 ************************************ 00:06:05.229 21:33:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.229 [2024-12-10 21:33:05.646478] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:05.229 [2024-12-10 21:33:05.646615] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59328 ] 00:06:05.229 [2024-12-10 21:33:05.790615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.229 [2024-12-10 21:33:05.836665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.229 Running 1000 pollers for 1 seconds with 0 microseconds period. 
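The poller_cost lines printed by poller_perf are consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Redoing that arithmetic for the first run above with shell integer math (values copied from the report; the variable names are ad hoc):

  busy=2213741873 runs=288000 tsc_hz=2200000000
  cost_cyc=$(( busy / runs ))                      # 7686
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 3493
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same calculation applied to the zero-period run that follows gives 614 cyc and 279 nsec, matching its report.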
00:06:06.164 [2024-12-10T21:33:06.947Z] ====================================== 00:06:06.164 [2024-12-10T21:33:06.947Z] busy:2202733638 (cyc) 00:06:06.164 [2024-12-10T21:33:06.947Z] total_run_count: 3583000 00:06:06.164 [2024-12-10T21:33:06.947Z] tsc_hz: 2200000000 (cyc) 00:06:06.164 [2024-12-10T21:33:06.947Z] ====================================== 00:06:06.164 [2024-12-10T21:33:06.947Z] poller_cost: 614 (cyc), 279 (nsec) 00:06:06.164 ************************************ 00:06:06.164 END TEST thread_poller_perf 00:06:06.164 ************************************ 00:06:06.164 00:06:06.164 real 0m1.250s 00:06:06.164 user 0m1.104s 00:06:06.164 sys 0m0.038s 00:06:06.164 21:33:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.164 21:33:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.164 21:33:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:06.164 ************************************ 00:06:06.164 END TEST thread 00:06:06.164 ************************************ 00:06:06.164 00:06:06.164 real 0m2.782s 00:06:06.164 user 0m2.360s 00:06:06.164 sys 0m0.204s 00:06:06.164 21:33:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.164 21:33:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 21:33:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:06.422 21:33:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.422 21:33:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.422 21:33:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.422 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 ************************************ 00:06:06.422 START TEST app_cmdline 00:06:06.422 ************************************ 00:06:06.422 21:33:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.422 * Looking for test storage... 
00:06:06.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.422 21:33:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.422 21:33:07 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.423 --rc genhtml_branch_coverage=1 00:06:06.423 --rc genhtml_function_coverage=1 00:06:06.423 --rc genhtml_legend=1 00:06:06.423 --rc geninfo_all_blocks=1 00:06:06.423 --rc geninfo_unexecuted_blocks=1 00:06:06.423 00:06:06.423 ' 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.423 --rc genhtml_branch_coverage=1 00:06:06.423 --rc genhtml_function_coverage=1 00:06:06.423 --rc genhtml_legend=1 00:06:06.423 --rc geninfo_all_blocks=1 00:06:06.423 --rc geninfo_unexecuted_blocks=1 00:06:06.423 
00:06:06.423 ' 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.423 --rc genhtml_branch_coverage=1 00:06:06.423 --rc genhtml_function_coverage=1 00:06:06.423 --rc genhtml_legend=1 00:06:06.423 --rc geninfo_all_blocks=1 00:06:06.423 --rc geninfo_unexecuted_blocks=1 00:06:06.423 00:06:06.423 ' 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.423 --rc genhtml_branch_coverage=1 00:06:06.423 --rc genhtml_function_coverage=1 00:06:06.423 --rc genhtml_legend=1 00:06:06.423 --rc geninfo_all_blocks=1 00:06:06.423 --rc geninfo_unexecuted_blocks=1 00:06:06.423 00:06:06.423 ' 00:06:06.423 21:33:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.423 21:33:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59411 00:06:06.423 21:33:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.423 21:33:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59411 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59411 ']' 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.423 21:33:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.423 [2024-12-10 21:33:07.184108] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:06.423 [2024-12-10 21:33:07.184218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59411 ] 00:06:06.681 [2024-12-10 21:33:07.328935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.681 [2024-12-10 21:33:07.364089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.681 [2024-12-10 21:33:07.405703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.939 21:33:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.939 21:33:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:06.939 21:33:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:07.198 { 00:06:07.198 "version": "SPDK v25.01-pre git sha1 626389917", 00:06:07.198 "fields": { 00:06:07.198 "major": 25, 00:06:07.198 "minor": 1, 00:06:07.198 "patch": 0, 00:06:07.198 "suffix": "-pre", 00:06:07.198 "commit": "626389917" 00:06:07.198 } 00:06:07.198 } 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.198 21:33:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:07.198 21:33:07 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.455 request: 00:06:07.455 { 00:06:07.455 "method": "env_dpdk_get_mem_stats", 00:06:07.455 "req_id": 1 00:06:07.455 } 00:06:07.455 Got JSON-RPC error response 00:06:07.455 response: 00:06:07.455 { 00:06:07.455 "code": -32601, 00:06:07.455 "message": "Method not found" 00:06:07.455 } 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.713 21:33:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59411 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59411 ']' 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59411 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59411 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.713 killing process with pid 59411 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59411' 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 59411 00:06:07.713 21:33:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 59411 00:06:07.971 00:06:07.971 real 0m1.584s 00:06:07.971 user 0m2.167s 00:06:07.971 sys 0m0.379s 00:06:07.971 21:33:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.971 21:33:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.971 ************************************ 00:06:07.971 END TEST app_cmdline 00:06:07.971 ************************************ 00:06:07.971 21:33:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.971 21:33:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.971 21:33:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.971 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:06:07.971 ************************************ 00:06:07.971 START TEST version 00:06:07.971 ************************************ 00:06:07.971 21:33:08 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.971 * Looking for test storage... 
00:06:07.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.971 21:33:08 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.971 21:33:08 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.971 21:33:08 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.230 21:33:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.230 21:33:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.230 21:33:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.230 21:33:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.230 21:33:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.230 21:33:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.230 21:33:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.230 21:33:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.230 21:33:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.230 21:33:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.230 21:33:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.230 21:33:08 version -- scripts/common.sh@344 -- # case "$op" in 00:06:08.230 21:33:08 version -- scripts/common.sh@345 -- # : 1 00:06:08.230 21:33:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.230 21:33:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.230 21:33:08 version -- scripts/common.sh@365 -- # decimal 1 00:06:08.230 21:33:08 version -- scripts/common.sh@353 -- # local d=1 00:06:08.230 21:33:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.230 21:33:08 version -- scripts/common.sh@355 -- # echo 1 00:06:08.230 21:33:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.230 21:33:08 version -- scripts/common.sh@366 -- # decimal 2 00:06:08.230 21:33:08 version -- scripts/common.sh@353 -- # local d=2 00:06:08.230 21:33:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.230 21:33:08 version -- scripts/common.sh@355 -- # echo 2 00:06:08.230 21:33:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.230 21:33:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.230 21:33:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.230 21:33:08 version -- scripts/common.sh@368 -- # return 0 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.230 --rc genhtml_branch_coverage=1 00:06:08.230 --rc genhtml_function_coverage=1 00:06:08.230 --rc genhtml_legend=1 00:06:08.230 --rc geninfo_all_blocks=1 00:06:08.230 --rc geninfo_unexecuted_blocks=1 00:06:08.230 00:06:08.230 ' 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.230 --rc genhtml_branch_coverage=1 00:06:08.230 --rc genhtml_function_coverage=1 00:06:08.230 --rc genhtml_legend=1 00:06:08.230 --rc geninfo_all_blocks=1 00:06:08.230 --rc geninfo_unexecuted_blocks=1 00:06:08.230 00:06:08.230 ' 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.230 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:08.230 --rc genhtml_branch_coverage=1 00:06:08.230 --rc genhtml_function_coverage=1 00:06:08.230 --rc genhtml_legend=1 00:06:08.230 --rc geninfo_all_blocks=1 00:06:08.230 --rc geninfo_unexecuted_blocks=1 00:06:08.230 00:06:08.230 ' 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.230 --rc genhtml_branch_coverage=1 00:06:08.230 --rc genhtml_function_coverage=1 00:06:08.230 --rc genhtml_legend=1 00:06:08.230 --rc geninfo_all_blocks=1 00:06:08.230 --rc geninfo_unexecuted_blocks=1 00:06:08.230 00:06:08.230 ' 00:06:08.230 21:33:08 version -- app/version.sh@17 -- # get_header_version major 00:06:08.230 21:33:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # cut -f2 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.230 21:33:08 version -- app/version.sh@17 -- # major=25 00:06:08.230 21:33:08 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.230 21:33:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # cut -f2 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.230 21:33:08 version -- app/version.sh@18 -- # minor=1 00:06:08.230 21:33:08 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.230 21:33:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # cut -f2 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.230 21:33:08 version -- app/version.sh@19 -- # patch=0 00:06:08.230 21:33:08 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.230 21:33:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # cut -f2 00:06:08.230 21:33:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.230 21:33:08 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.230 21:33:08 version -- app/version.sh@22 -- # version=25.1 00:06:08.230 21:33:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.230 21:33:08 version -- app/version.sh@28 -- # version=25.1rc0 00:06:08.230 21:33:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:08.230 21:33:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.230 21:33:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:08.230 21:33:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:08.230 ************************************ 00:06:08.230 END TEST version 00:06:08.230 ************************************ 00:06:08.230 00:06:08.230 real 0m0.258s 00:06:08.230 user 0m0.170s 00:06:08.230 sys 0m0.120s 00:06:08.230 21:33:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.230 21:33:08 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.230 21:33:08 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:08.230 21:33:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:08.230 21:33:08 -- spdk/autotest.sh@194 -- # uname -s 00:06:08.230 21:33:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:08.230 21:33:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:08.230 21:33:08 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:08.230 21:33:08 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:08.230 21:33:08 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.230 21:33:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.231 21:33:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.231 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.231 ************************************ 00:06:08.231 START TEST spdk_dd 00:06:08.231 ************************************ 00:06:08.231 21:33:08 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:08.231 * Looking for test storage... 00:06:08.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.231 21:33:08 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.231 21:33:08 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.231 21:33:08 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.489 21:33:09 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:08.489 21:33:09 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:08.490 21:33:09 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.490 21:33:09 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.490 --rc genhtml_branch_coverage=1 00:06:08.490 --rc genhtml_function_coverage=1 00:06:08.490 --rc genhtml_legend=1 00:06:08.490 --rc geninfo_all_blocks=1 00:06:08.490 --rc geninfo_unexecuted_blocks=1 00:06:08.490 00:06:08.490 ' 00:06:08.490 21:33:09 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.490 --rc genhtml_branch_coverage=1 00:06:08.490 --rc genhtml_function_coverage=1 00:06:08.490 --rc genhtml_legend=1 00:06:08.490 --rc geninfo_all_blocks=1 00:06:08.490 --rc geninfo_unexecuted_blocks=1 00:06:08.490 00:06:08.490 ' 00:06:08.490 21:33:09 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.490 --rc genhtml_branch_coverage=1 00:06:08.490 --rc genhtml_function_coverage=1 00:06:08.490 --rc genhtml_legend=1 00:06:08.490 --rc geninfo_all_blocks=1 00:06:08.490 --rc geninfo_unexecuted_blocks=1 00:06:08.490 00:06:08.490 ' 00:06:08.490 21:33:09 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.490 --rc genhtml_branch_coverage=1 00:06:08.490 --rc genhtml_function_coverage=1 00:06:08.490 --rc genhtml_legend=1 00:06:08.490 --rc geninfo_all_blocks=1 00:06:08.490 --rc geninfo_unexecuted_blocks=1 00:06:08.490 00:06:08.490 ' 00:06:08.490 21:33:09 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.490 21:33:09 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.490 21:33:09 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.490 21:33:09 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.490 21:33:09 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.490 21:33:09 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:08.490 21:33:09 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.490 21:33:09 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.748 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.748 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.748 21:33:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:08.748 21:33:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:08.748 21:33:09 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:08.749 21:33:09 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:08.749 21:33:09 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.749 21:33:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:08.749 21:33:09 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:08.749 21:33:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:08.749 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.749 21:33:09 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.749 21:33:09 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
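The pipeline traced above is how scripts/common.sh enumerates NVMe controllers: it filters lspci output on PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A minimal standalone sketch of that filter, assuming only that lspci from pciutils is installed; the helper name list_nvme_bdfs is illustrative and the allow/block-list handling done by pci_can_use is omitted:

# Sketch: print the PCI addresses of NVMe controllers (class/subclass/prog-if 01/08/02),
# mirroring the lspci | grep | awk | tr pipeline shown in the trace above.
list_nvme_bdfs() {
    local class=01 subclass=08 progif=02
    lspci -mm -n -D |
        grep -i -- "-p${progif}" |
        awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
        tr -d '"'
}
list_nvme_bdfs    # on this VM the trace resolves to 0000:00:10.0 and 0000:00:11.0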
00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
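check_liburing in dd/common.sh is what drives the long read loop continuing below: objdump lists the DT_NEEDED entries of the spdk_dd binary and every shared-library name is tested against liburing.so.*. A condensed sketch of that check, assuming binutils' objdump is on PATH; the function body is simplified and the build_config fallback the real helper performs afterwards is left out:

# Sketch: report whether a binary is dynamically linked against liburing,
# condensing the objdump | grep NEEDED / read -r _ lib _ loop from the trace.
check_liburing() {
    local bin=$1 lib _ liburing_in_use=0
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1
            printf '* %s linked to liburing\n' "${bin##*/}"
            break
        fi
    done < <(objdump -p "$bin" | grep NEEDED)
    return 0
}
check_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd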
00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:09.009 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:09.010 * spdk_dd linked to liburing 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:09.010 21:33:09 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:09.010 21:33:09 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:09.011 21:33:09 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:09.011 21:33:09 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:09.011 21:33:09 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:09.011 21:33:09 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:09.011 21:33:09 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:09.011 21:33:09 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:09.011 21:33:09 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:09.011 21:33:09 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:09.011 21:33:09 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:09.011 21:33:09 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.011 21:33:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:09.011 ************************************ 00:06:09.011 START TEST spdk_dd_basic_rw 00:06:09.011 ************************************ 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:09.011 * Looking for test storage... 00:06:09.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.011 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:09.271 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.272 --rc genhtml_branch_coverage=1 00:06:09.272 --rc genhtml_function_coverage=1 00:06:09.272 --rc genhtml_legend=1 00:06:09.272 --rc geninfo_all_blocks=1 00:06:09.272 --rc geninfo_unexecuted_blocks=1 00:06:09.272 00:06:09.272 ' 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.272 --rc genhtml_branch_coverage=1 00:06:09.272 --rc genhtml_function_coverage=1 00:06:09.272 --rc genhtml_legend=1 00:06:09.272 --rc geninfo_all_blocks=1 00:06:09.272 --rc geninfo_unexecuted_blocks=1 00:06:09.272 00:06:09.272 ' 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.272 --rc genhtml_branch_coverage=1 00:06:09.272 --rc genhtml_function_coverage=1 00:06:09.272 --rc genhtml_legend=1 00:06:09.272 --rc geninfo_all_blocks=1 00:06:09.272 --rc geninfo_unexecuted_blocks=1 00:06:09.272 00:06:09.272 ' 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.272 --rc genhtml_branch_coverage=1 00:06:09.272 --rc genhtml_function_coverage=1 00:06:09.272 --rc genhtml_legend=1 00:06:09.272 --rc geninfo_all_blocks=1 00:06:09.272 --rc geninfo_unexecuted_blocks=1 00:06:09.272 00:06:09.272 ' 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
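basic_rw.sh describes the target controller as a bash associative array (traced at basic_rw.sh@85 above); gen_conf later renders that description into the JSON bdev configuration that spdk_dd reads from a file descriptor, and the same JSON appears verbatim further down in this log. A sketch of the mapping, with every value taken from the trace; the comment block is only a reformatted copy of that JSON, not a claim about gen_conf internals:

# The controller description as declared in basic_rw.sh:
declare -A method_bdev_nvme_attach_controller_0=(
    ['name']='Nvme0'
    ['traddr']='0000:00:10.0'
    ['trtype']='pcie'
)
# gen_conf (dd/common.sh) renders this into the JSON config handed to spdk_dd
# via --json /dev/fd/61; per the trace below it comes out as:
# { "subsystems": [ { "subsystem": "bdev", "config": [
#     { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
#       "method": "bdev_nvme_attach_controller" },
#     { "method": "bdev_wait_for_examine" } ] } ] }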
00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:09.272 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:09.273 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:09.273 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.274 21:33:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.274 ************************************ 00:06:09.274 START TEST dd_bs_lt_native_bs 00:06:09.274 ************************************ 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.274 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:09.533 { 00:06:09.533 "subsystems": [ 00:06:09.533 { 00:06:09.533 "subsystem": "bdev", 00:06:09.533 "config": [ 00:06:09.533 { 00:06:09.533 "params": { 00:06:09.533 "trtype": "pcie", 00:06:09.533 "traddr": "0000:00:10.0", 00:06:09.533 "name": "Nvme0" 00:06:09.533 }, 00:06:09.533 "method": "bdev_nvme_attach_controller" 00:06:09.533 }, 00:06:09.533 { 00:06:09.533 "method": "bdev_wait_for_examine" 00:06:09.533 } 00:06:09.533 ] 00:06:09.533 } 00:06:09.533 ] 00:06:09.533 } 00:06:09.533 [2024-12-10 21:33:10.064513] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:09.533 [2024-12-10 21:33:10.064614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59755 ] 00:06:09.533 [2024-12-10 21:33:10.217043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.533 [2024-12-10 21:33:10.276696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.798 [2024-12-10 21:33:10.320113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.798 [2024-12-10 21:33:10.421960] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:09.798 [2024-12-10 21:33:10.422071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.798 [2024-12-10 21:33:10.504577] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:09.798 ************************************ 00:06:09.798 END TEST dd_bs_lt_native_bs 00:06:09.798 ************************************ 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.798 00:06:09.798 real 0m0.562s 00:06:09.798 user 0m0.395s 00:06:09.798 sys 0m0.128s 00:06:09.798 
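For reference, the negative case exercised above can be reproduced outside the test harness. The lines below are a minimal sketch, not the harness code itself: they assume an SPDK build at ./build/bin/spdk_dd, an NVMe controller at PCI address 0000:00:10.0 whose namespace reports a 4096-byte native block size (as in the identify output above), and a scratch config at /tmp/nvme0.json (a placeholder path, mirroring the JSON the trace feeds through /dev/fd/61).

# Hypothetical standalone repro of the bs-less-than-native-block-size check.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Expect a non-zero exit: --bs=2048 is below the output bdev's 4096 B native block size.
if ./build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=2048 --count=1 --json /tmp/nvme0.json; then
  echo "ERROR: spdk_dd accepted --bs below the native block size" >&2
  exit 1
fi
echo "spdk_dd rejected --bs=2048, as expected"

In the trace above the same expectation is expressed by wrapping the spdk_dd invocation in the NOT helper, so the expected failure is recorded as a pass.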
21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.798 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.063 ************************************ 00:06:10.063 START TEST dd_rw 00:06:10.063 ************************************ 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:10.063 21:33:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.630 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:10.630 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.630 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.630 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.630 [2024-12-10 21:33:11.353196] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:10.630 [2024-12-10 21:33:11.353604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:06:10.630 { 00:06:10.630 "subsystems": [ 00:06:10.630 { 00:06:10.630 "subsystem": "bdev", 00:06:10.630 "config": [ 00:06:10.630 { 00:06:10.630 "params": { 00:06:10.630 "trtype": "pcie", 00:06:10.630 "traddr": "0000:00:10.0", 00:06:10.630 "name": "Nvme0" 00:06:10.630 }, 00:06:10.630 "method": "bdev_nvme_attach_controller" 00:06:10.630 }, 00:06:10.630 { 00:06:10.630 "method": "bdev_wait_for_examine" 00:06:10.630 } 00:06:10.630 ] 00:06:10.630 } 00:06:10.630 ] 00:06:10.630 } 00:06:10.889 [2024-12-10 21:33:11.497423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.889 [2024-12-10 21:33:11.530719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.889 [2024-12-10 21:33:11.560970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.889  [2024-12-10T21:33:11.930Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:11.147 00:06:11.147 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:11.147 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.147 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.147 21:33:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.147 { 00:06:11.147 "subsystems": [ 00:06:11.147 { 00:06:11.147 "subsystem": "bdev", 00:06:11.147 "config": [ 00:06:11.147 { 00:06:11.147 "params": { 00:06:11.147 "trtype": "pcie", 00:06:11.147 "traddr": "0000:00:10.0", 00:06:11.147 "name": "Nvme0" 00:06:11.147 }, 00:06:11.147 "method": "bdev_nvme_attach_controller" 00:06:11.147 }, 00:06:11.147 { 00:06:11.147 "method": "bdev_wait_for_examine" 00:06:11.147 } 00:06:11.147 ] 00:06:11.147 } 00:06:11.147 ] 00:06:11.147 } 00:06:11.147 [2024-12-10 21:33:11.837967] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:11.147 [2024-12-10 21:33:11.838052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:06:11.406 [2024-12-10 21:33:11.978672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.406 [2024-12-10 21:33:12.012515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.406 [2024-12-10 21:33:12.044200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.406  [2024-12-10T21:33:12.448Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:11.665 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.665 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.665 [2024-12-10 21:33:12.340092] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:11.665 [2024-12-10 21:33:12.340205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:06:11.665 { 00:06:11.665 "subsystems": [ 00:06:11.665 { 00:06:11.665 "subsystem": "bdev", 00:06:11.665 "config": [ 00:06:11.665 { 00:06:11.665 "params": { 00:06:11.665 "trtype": "pcie", 00:06:11.665 "traddr": "0000:00:10.0", 00:06:11.665 "name": "Nvme0" 00:06:11.665 }, 00:06:11.665 "method": "bdev_nvme_attach_controller" 00:06:11.665 }, 00:06:11.665 { 00:06:11.665 "method": "bdev_wait_for_examine" 00:06:11.665 } 00:06:11.665 ] 00:06:11.665 } 00:06:11.665 ] 00:06:11.665 } 00:06:11.924 [2024-12-10 21:33:12.490751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.924 [2024-12-10 21:33:12.532825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.924 [2024-12-10 21:33:12.570416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.924  [2024-12-10T21:33:12.965Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:12.182 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.182 21:33:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.749 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:12.749 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.749 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.749 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.749 { 00:06:12.749 "subsystems": [ 00:06:12.749 { 00:06:12.749 "subsystem": "bdev", 00:06:12.749 "config": [ 00:06:12.749 { 00:06:12.749 "params": { 00:06:12.749 "trtype": "pcie", 00:06:12.749 "traddr": "0000:00:10.0", 00:06:12.749 "name": "Nvme0" 00:06:12.749 }, 00:06:12.749 "method": "bdev_nvme_attach_controller" 00:06:12.749 }, 00:06:12.749 { 00:06:12.749 "method": "bdev_wait_for_examine" 00:06:12.749 } 00:06:12.749 ] 00:06:12.749 } 00:06:12.749 ] 00:06:12.749 } 00:06:12.749 [2024-12-10 21:33:13.512364] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:12.749 [2024-12-10 21:33:13.512944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:06:13.007 [2024-12-10 21:33:13.672187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.007 [2024-12-10 21:33:13.712489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.007 [2024-12-10 21:33:13.747630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.265  [2024-12-10T21:33:14.049Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:13.266 00:06:13.266 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.266 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:13.266 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.266 21:33:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 { 00:06:13.266 "subsystems": [ 00:06:13.266 { 00:06:13.266 "subsystem": "bdev", 00:06:13.266 "config": [ 00:06:13.266 { 00:06:13.266 "params": { 00:06:13.266 "trtype": "pcie", 00:06:13.266 "traddr": "0000:00:10.0", 00:06:13.266 "name": "Nvme0" 00:06:13.266 }, 00:06:13.266 "method": "bdev_nvme_attach_controller" 00:06:13.266 }, 00:06:13.266 { 00:06:13.266 "method": "bdev_wait_for_examine" 00:06:13.266 } 00:06:13.266 ] 00:06:13.266 } 00:06:13.266 ] 00:06:13.266 } 00:06:13.266 [2024-12-10 21:33:14.040795] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:13.266 [2024-12-10 21:33:14.040907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:06:13.524 [2024-12-10 21:33:14.188498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.524 [2024-12-10 21:33:14.221710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.524 [2024-12-10 21:33:14.252270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.782  [2024-12-10T21:33:14.566Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:13.783 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.783 21:33:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.783 [2024-12-10 21:33:14.541371] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:13.783 [2024-12-10 21:33:14.541740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:06:13.783 { 00:06:13.783 "subsystems": [ 00:06:13.783 { 00:06:13.783 "subsystem": "bdev", 00:06:13.783 "config": [ 00:06:13.783 { 00:06:13.783 "params": { 00:06:13.783 "trtype": "pcie", 00:06:13.783 "traddr": "0000:00:10.0", 00:06:13.783 "name": "Nvme0" 00:06:13.783 }, 00:06:13.783 "method": "bdev_nvme_attach_controller" 00:06:13.783 }, 00:06:13.783 { 00:06:13.783 "method": "bdev_wait_for_examine" 00:06:13.783 } 00:06:13.783 ] 00:06:13.783 } 00:06:13.783 ] 00:06:13.783 } 00:06:14.041 [2024-12-10 21:33:14.689606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.041 [2024-12-10 21:33:14.730822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.041 [2024-12-10 21:33:14.766321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.299  [2024-12-10T21:33:15.082Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.299 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.299 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.233 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:15.233 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:15.233 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.233 21:33:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.233 [2024-12-10 21:33:15.740062] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:15.233 [2024-12-10 21:33:15.740608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:06:15.233 { 00:06:15.233 "subsystems": [ 00:06:15.233 { 00:06:15.233 "subsystem": "bdev", 00:06:15.233 "config": [ 00:06:15.233 { 00:06:15.233 "params": { 00:06:15.233 "trtype": "pcie", 00:06:15.233 "traddr": "0000:00:10.0", 00:06:15.233 "name": "Nvme0" 00:06:15.233 }, 00:06:15.233 "method": "bdev_nvme_attach_controller" 00:06:15.233 }, 00:06:15.233 { 00:06:15.233 "method": "bdev_wait_for_examine" 00:06:15.233 } 00:06:15.233 ] 00:06:15.233 } 00:06:15.233 ] 00:06:15.233 } 00:06:15.233 [2024-12-10 21:33:15.890159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.233 [2024-12-10 21:33:15.938035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.233 [2024-12-10 21:33:15.972412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.491  [2024-12-10T21:33:16.274Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:15.491 00:06:15.491 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:15.491 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.491 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.491 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.749 { 00:06:15.749 "subsystems": [ 00:06:15.749 { 00:06:15.749 "subsystem": "bdev", 00:06:15.749 "config": [ 00:06:15.749 { 00:06:15.749 "params": { 00:06:15.749 "trtype": "pcie", 00:06:15.749 "traddr": "0000:00:10.0", 00:06:15.749 "name": "Nvme0" 00:06:15.749 }, 00:06:15.750 "method": "bdev_nvme_attach_controller" 00:06:15.750 }, 00:06:15.750 { 00:06:15.750 "method": "bdev_wait_for_examine" 00:06:15.750 } 00:06:15.750 ] 00:06:15.750 } 00:06:15.750 ] 00:06:15.750 } 00:06:15.750 [2024-12-10 21:33:16.294366] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:15.750 [2024-12-10 21:33:16.294522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:06:15.750 [2024-12-10 21:33:16.451393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.750 [2024-12-10 21:33:16.510979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.008 [2024-12-10 21:33:16.553425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.008  [2024-12-10T21:33:17.050Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:16.267 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.267 21:33:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.267 { 00:06:16.267 "subsystems": [ 00:06:16.267 { 00:06:16.267 "subsystem": "bdev", 00:06:16.267 "config": [ 00:06:16.267 { 00:06:16.267 "params": { 00:06:16.267 "trtype": "pcie", 00:06:16.267 "traddr": "0000:00:10.0", 00:06:16.267 "name": "Nvme0" 00:06:16.267 }, 00:06:16.267 "method": "bdev_nvme_attach_controller" 00:06:16.267 }, 00:06:16.267 { 00:06:16.267 "method": "bdev_wait_for_examine" 00:06:16.267 } 00:06:16.267 ] 00:06:16.267 } 00:06:16.267 ] 00:06:16.267 } 00:06:16.267 [2024-12-10 21:33:16.881810] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:16.267 [2024-12-10 21:33:16.881969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:06:16.267 [2024-12-10 21:33:17.033902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.525 [2024-12-10 21:33:17.083546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.525 [2024-12-10 21:33:17.119611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.525  [2024-12-10T21:33:17.566Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.783 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:16.783 21:33:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.350 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:17.350 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:17.350 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.350 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.350 { 00:06:17.350 "subsystems": [ 00:06:17.350 { 00:06:17.350 "subsystem": "bdev", 00:06:17.350 "config": [ 00:06:17.350 { 00:06:17.350 "params": { 00:06:17.350 "trtype": "pcie", 00:06:17.350 "traddr": "0000:00:10.0", 00:06:17.350 "name": "Nvme0" 00:06:17.350 }, 00:06:17.350 "method": "bdev_nvme_attach_controller" 00:06:17.350 }, 00:06:17.350 { 00:06:17.350 "method": "bdev_wait_for_examine" 00:06:17.350 } 00:06:17.350 ] 00:06:17.350 } 00:06:17.350 ] 00:06:17.350 } 00:06:17.350 [2024-12-10 21:33:18.096757] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:17.350 [2024-12-10 21:33:18.096909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:06:17.608 [2024-12-10 21:33:18.251419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.608 [2024-12-10 21:33:18.284901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.608 [2024-12-10 21:33:18.315854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.869  [2024-12-10T21:33:18.652Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:17.869 00:06:17.869 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:17.869 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.869 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.869 21:33:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.869 { 00:06:17.869 "subsystems": [ 00:06:17.869 { 00:06:17.869 "subsystem": "bdev", 00:06:17.869 "config": [ 00:06:17.869 { 00:06:17.869 "params": { 00:06:17.869 "trtype": "pcie", 00:06:17.869 "traddr": "0000:00:10.0", 00:06:17.869 "name": "Nvme0" 00:06:17.869 }, 00:06:17.869 "method": "bdev_nvme_attach_controller" 00:06:17.869 }, 00:06:17.869 { 00:06:17.869 "method": "bdev_wait_for_examine" 00:06:17.869 } 00:06:17.869 ] 00:06:17.869 } 00:06:17.869 ] 00:06:17.869 } 00:06:18.128 [2024-12-10 21:33:18.653330] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:18.129 [2024-12-10 21:33:18.653505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59949 ] 00:06:18.129 [2024-12-10 21:33:18.808225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.129 [2024-12-10 21:33:18.858730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.129 [2024-12-10 21:33:18.895843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.387  [2024-12-10T21:33:19.170Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:18.387 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.387 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 { 00:06:18.645 "subsystems": [ 00:06:18.645 { 00:06:18.645 "subsystem": "bdev", 00:06:18.645 "config": [ 00:06:18.645 { 00:06:18.645 "params": { 00:06:18.645 "trtype": "pcie", 00:06:18.645 "traddr": "0000:00:10.0", 00:06:18.645 "name": "Nvme0" 00:06:18.645 }, 00:06:18.646 "method": "bdev_nvme_attach_controller" 00:06:18.646 }, 00:06:18.646 { 00:06:18.646 "method": "bdev_wait_for_examine" 00:06:18.646 } 00:06:18.646 ] 00:06:18.646 } 00:06:18.646 ] 00:06:18.646 } 00:06:18.646 [2024-12-10 21:33:19.209196] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:18.646 [2024-12-10 21:33:19.209342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59972 ] 00:06:18.646 [2024-12-10 21:33:19.359105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.646 [2024-12-10 21:33:19.393384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.646 [2024-12-10 21:33:19.425425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.904  [2024-12-10T21:33:19.687Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:18.904 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:19.162 21:33:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:19.728 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:19.728 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.728 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 { 00:06:19.728 "subsystems": [ 00:06:19.728 { 00:06:19.728 "subsystem": "bdev", 00:06:19.728 "config": [ 00:06:19.728 { 00:06:19.728 "params": { 00:06:19.728 "trtype": "pcie", 00:06:19.728 "traddr": "0000:00:10.0", 00:06:19.728 "name": "Nvme0" 00:06:19.728 }, 00:06:19.728 "method": "bdev_nvme_attach_controller" 00:06:19.728 }, 00:06:19.728 { 00:06:19.728 "method": "bdev_wait_for_examine" 00:06:19.728 } 00:06:19.728 ] 00:06:19.728 } 00:06:19.728 ] 00:06:19.728 } 00:06:19.728 [2024-12-10 21:33:20.298332] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:19.728 [2024-12-10 21:33:20.298519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:06:19.728 [2024-12-10 21:33:20.449337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.728 [2024-12-10 21:33:20.499516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.986 [2024-12-10 21:33:20.538156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.986  [2024-12-10T21:33:21.027Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:20.244 00:06:20.244 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:20.244 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:20.244 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.244 21:33:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.244 { 00:06:20.244 "subsystems": [ 00:06:20.244 { 00:06:20.244 "subsystem": "bdev", 00:06:20.244 "config": [ 00:06:20.244 { 00:06:20.244 "params": { 00:06:20.244 "trtype": "pcie", 00:06:20.244 "traddr": "0000:00:10.0", 00:06:20.244 "name": "Nvme0" 00:06:20.244 }, 00:06:20.244 "method": "bdev_nvme_attach_controller" 00:06:20.244 }, 00:06:20.244 { 00:06:20.244 "method": "bdev_wait_for_examine" 00:06:20.244 } 00:06:20.244 ] 00:06:20.244 } 00:06:20.244 ] 00:06:20.244 } 00:06:20.244 [2024-12-10 21:33:20.850652] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:20.244 [2024-12-10 21:33:20.850787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:06:20.244 [2024-12-10 21:33:21.003925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.502 [2024-12-10 21:33:21.038278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.502 [2024-12-10 21:33:21.069868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.502  [2024-12-10T21:33:21.543Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:20.760 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.760 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.760 [2024-12-10 21:33:21.347809] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:20.760 [2024-12-10 21:33:21.347911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60020 ] 00:06:20.760 { 00:06:20.760 "subsystems": [ 00:06:20.760 { 00:06:20.760 "subsystem": "bdev", 00:06:20.760 "config": [ 00:06:20.760 { 00:06:20.760 "params": { 00:06:20.760 "trtype": "pcie", 00:06:20.760 "traddr": "0000:00:10.0", 00:06:20.760 "name": "Nvme0" 00:06:20.760 }, 00:06:20.760 "method": "bdev_nvme_attach_controller" 00:06:20.760 }, 00:06:20.760 { 00:06:20.760 "method": "bdev_wait_for_examine" 00:06:20.760 } 00:06:20.760 ] 00:06:20.760 } 00:06:20.760 ] 00:06:20.760 } 00:06:20.760 [2024-12-10 21:33:21.490943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.760 [2024-12-10 21:33:21.528737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.018 [2024-12-10 21:33:21.561835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.018  [2024-12-10T21:33:21.801Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.018 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.018 21:33:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.584 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:21.584 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:21.584 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.584 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.584 { 00:06:21.584 "subsystems": [ 00:06:21.584 { 00:06:21.584 "subsystem": "bdev", 00:06:21.584 "config": [ 00:06:21.584 { 00:06:21.584 "params": { 00:06:21.584 "trtype": "pcie", 00:06:21.584 "traddr": "0000:00:10.0", 00:06:21.584 "name": "Nvme0" 00:06:21.584 }, 00:06:21.584 "method": "bdev_nvme_attach_controller" 00:06:21.584 }, 00:06:21.584 { 00:06:21.584 "method": "bdev_wait_for_examine" 00:06:21.584 } 00:06:21.584 ] 00:06:21.584 } 00:06:21.584 ] 00:06:21.584 } 00:06:21.842 [2024-12-10 21:33:22.370822] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:21.842 [2024-12-10 21:33:22.370974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60039 ] 00:06:21.842 [2024-12-10 21:33:22.521732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.842 [2024-12-10 21:33:22.571730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.842 [2024-12-10 21:33:22.609863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.099  [2024-12-10T21:33:22.882Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:22.099 00:06:22.099 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:22.099 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:22.099 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.099 21:33:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.358 [2024-12-10 21:33:22.910276] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:22.358 [2024-12-10 21:33:22.910412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:06:22.358 { 00:06:22.358 "subsystems": [ 00:06:22.358 { 00:06:22.358 "subsystem": "bdev", 00:06:22.358 "config": [ 00:06:22.358 { 00:06:22.358 "params": { 00:06:22.358 "trtype": "pcie", 00:06:22.358 "traddr": "0000:00:10.0", 00:06:22.358 "name": "Nvme0" 00:06:22.358 }, 00:06:22.358 "method": "bdev_nvme_attach_controller" 00:06:22.358 }, 00:06:22.358 { 00:06:22.358 "method": "bdev_wait_for_examine" 00:06:22.358 } 00:06:22.358 ] 00:06:22.358 } 00:06:22.358 ] 00:06:22.358 } 00:06:22.358 [2024-12-10 21:33:23.058114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.358 [2024-12-10 21:33:23.093010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.358 [2024-12-10 21:33:23.124206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.617  [2024-12-10T21:33:23.400Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:22.617 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
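Each dd_rw iteration traced above follows the same pattern: write a generated dump file to the Nvme0n1 bdev at a given --bs/--qd, read the same byte count back into a second file, diff the two, then clear the bdev with a 1 MiB zero write. A minimal standalone sketch of one iteration follows, under the same assumptions as the earlier sketch (/tmp paths and /tmp/nvme0.json are placeholders, not the harness's own dd.dump0/dd.dump1 files).

# Sketch of one write/read/verify/clear round, mirroring the traced commands.
BS=4096; QD=1; COUNT=15; SIZE=$((BS * COUNT))      # 61440 bytes, as in the first round above
head -c "$SIZE" /dev/urandom > /tmp/dd.dump0        # stand-in for the harness's gen_bytes
./build/bin/spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --bs="$BS" --qd="$QD" --json /tmp/nvme0.json
./build/bin/spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump1 --bs="$BS" --qd="$QD" --count="$COUNT" --json /tmp/nvme0.json
diff -q /tmp/dd.dump0 /tmp/dd.dump1                 # the round fails if the data read back differs
./build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json /tmp/nvme0.json   # clear_nvme equivalent

Here --qd is the queue depth spdk_dd keeps in flight, which is why the harness repeats each block size at qd=1 and qd=64.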
00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.617 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.875 { 00:06:22.875 "subsystems": [ 00:06:22.875 { 00:06:22.875 "subsystem": "bdev", 00:06:22.875 "config": [ 00:06:22.875 { 00:06:22.875 "params": { 00:06:22.875 "trtype": "pcie", 00:06:22.875 "traddr": "0000:00:10.0", 00:06:22.875 "name": "Nvme0" 00:06:22.875 }, 00:06:22.875 "method": "bdev_nvme_attach_controller" 00:06:22.875 }, 00:06:22.875 { 00:06:22.875 "method": "bdev_wait_for_examine" 00:06:22.875 } 00:06:22.875 ] 00:06:22.875 } 00:06:22.875 ] 00:06:22.875 } 00:06:22.875 [2024-12-10 21:33:23.425273] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:22.875 [2024-12-10 21:33:23.425410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60068 ] 00:06:22.875 [2024-12-10 21:33:23.576845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.875 [2024-12-10 21:33:23.615133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.875 [2024-12-10 21:33:23.646269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.133  [2024-12-10T21:33:23.916Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.133 00:06:23.133 00:06:23.133 real 0m13.265s 00:06:23.133 user 0m10.034s 00:06:23.133 sys 0m3.958s 00:06:23.133 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.133 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.133 ************************************ 00:06:23.133 END TEST dd_rw 00:06:23.133 ************************************ 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 ************************************ 00:06:23.392 START TEST dd_rw_offset 00:06:23.392 ************************************ 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=x95ro72f3tl8wwmfjf4hqoqfkbun5ad2ouke8ldl66a3hoesbl3jgy622tk8dt5bsaxh581os07baxubdqq9jnfqf9g9avpfq5ehzm72r8qgbdixyuigjwn2vixt65yad6ummz4bs69v3m758nz57uu3v7ouprtdok5hp2x1dulccyrueizkwcaid4l2c2rptlt1mg07yzhyokqa8wa9loooljs7nhoyfmrni43zhadgwgovgj9pypul29q2ic1txudw0ckg28ayf00q7v7j3snu4tv25g7pckph5lig4d1t3ddrekmwwsawudmbaao7ezrvns5gz2rswaerupsa2i6wv5hb9h3xlkhkpdwy1a1uo8a0zdanu3ofpkp64px57j98a2tkr2blu1e25gx6klqtn2wnutyagpwbw1xwtiqoko7ikxxuph65mufg1rj8ug6tbun95dld4at25csqhw39v9iubvtl5tbsmfyor5nmgfn36bhh2g4fc997vskiawy6f6oi9vxe4v6sfk4s3n3wmo8rguyry5vv09j5fc9m8ozs8ejwfvj7bfwjch05mgbkq6g8c6g1pki9l4f5w61pvkuatk3222vahpdjzccj1d9sumj6hznfci7zad8ew2ty99z50wi2optix7f6wgg289iv2jy5bvyore7xoogvs07mskziw9d905t8o93djuc4t7z7nuovym6moxck60pyceiorpf3h5yec887zsb9k6wzuo0x7h67qifl6z40oyoyu6k85h7qv7hza5uwg85stq6yrae7ketjvtfeo0unhrzb7p5pz52jpia242m6c6tca534bxhnl84pf2mybznjt9nhh1x7mwn96tzg2mzg643rmnila2lzgp52b9p7q989pckpjrswz5ml5u0jdaneahhom2q3u0vpop09hyl78s70nu19ibrmanadlqijsvluxjp6282yxgfd7j2upw70wkuas9095kgfdbiwslqnoh17ooxo46jzta9fbcm7pfyizhuvz9t22v7gz9nt87h35egtdrfbh3i2utue3keim12vcgwla6bkmiao43puazg6u2op03ojxq5utn4l3abk2yc1oso67p8aunvios5vrctp9krl6uduw8c9zgi49awjbq7j0da5rnoo1ew34eefykobbtl29u9uk421anip5yw4k264lug7tmbhdpq3tclbpisys9bezs9u5s7wgk3wqjh3kt8lf5fkzpg4r5cgg7avr39tbglf2ybu7jgwe7pns7bz00fb86v1nf9uqgvf1ydw2ql6swy9n3emar8x5t6pk1x8ixmxt5ztyoei113or9ykj3zgcmhve03jds0da9t0nyll9f7tkn2tjuy19es3y99xvr9bl5dqi246wzf7anw7jw429cpu2fbrsc8j4lyfiyqd9efh01lpf9kilgyxpn5g3hftfmpuraoanzjncardgth5e65p1edrkecop5uwmh7tczop2skls6ayb5etxm4acjrh43zgb4irj5weezja5bp2wx3j0fzvqirap8vs0ar3mw2zvt56xhl0p3le20bihj585o12tcvro66lvhq1gxl3wiwyg6rbtx906xe3gf8t7t9y03th91c9dmt5gw4a4x3rko6d0l56fwzytadddh7i3o0b6lop8tg3pygnhk1lgcxky5lwdivdb748n2u4c73asg3lsgej1bqa4ci2ydecwl2yqlo7qmc6w1gid4hlobo2n1xqhaqhwnysin7b2kdlc9zdta0zx3jzk2yokk5q78xepgkgzgya40l4z4jvkawshnvnns1gwpxu9jeel6kraop5344yzymxtavss7b85b6p7ljy6j1ki8iu2t7390j7zoxmcd3mjkxwoouqsjtdai5pt2eqh9tuco465oqe4otmaq4m3q0vqwfqhd9eyqe9emtyjnrxjrj5sss52m9bq7rddi4f72tg6k58rxcb33k2xmv10du5fymyz0dgnyjge3s2gljfgm430bqr4du7drqytsuyn1d25m4tdblbb17ygnlv1wrfoingz499yjz7k92lnfuo79upvhygrhe564nrnzxghemun0iumgwxl04t0a7b36kh7fbydu93vekd5lbi4ymrco1u5w9orakkrnoyaqgj2drca13nmyuho7vebg6d5vupf6uvyb5n1t5vdal03dpgi0gdbi2baalieg5mwtb0wf384he1tsb4cvm7qu6e8ted4i3es0n3mm3ze8bmbu5wxtdbw9xs7akkhbrnq6a12kh3dyfa2fptx2tde53dfxy0m49m2b8ubvxek8iev1dk2kxwtnk5vz8trab6vavef95yc5ctt2lgdudhe2g3xspy5qovopxcln6pmtr9uc1y9nksudh1khxu2vj1chekl3i2if983v95nm6dc60mngel0d7r4nftbmmm2byudipyc9vj9kd5ut7s5whghvds6a37ets2l3gzzcg4mme556a2zf06g59bka3kt3xrkxvrv00hqiux1lzmej8gf6dt2srhi6ewedjo73ciajn2gibabz4msm1bcz2jafp8nbwtfk2pp75x66jd04h3gmgqkq5mybsw54eyiv4z9vx41b8ytyuaqhnupkmt2t1j0ihrp3vpdknnmrn62z61gfekxypdykjs3z8x15vcofdv42cq8p00ivzbwkjsk5gvaegmmohdgx4kkbp9eic58pih8f6nrzotvfgzt5wl1swe16fdhkzqvuj8p0u07tkaqxa7y4f8mu2u12n40oenc51fsj2q1vb9cbxq2olcik8z5h2djlurvo6kjd3v15tn7y3gfboq6ttk1beo5pbqr2a5vv7zerkslzarpitzy1vvh56cqui3yi9qt6mx2bprm8ceqwwzunyrdcy37hcoc20wjwcozeodl44cjrz50h3qdugwdn16l3hoblot4atvxp31lfz5gj8mcnyyiahpgt5eabajgxzkd5debwvry0wvz24w4g6aefidvf0rknppjdoyzsmrbhbtegso2saqd6j6n1adeov60bhe35tb1jfhftux59d8fytwrxg8xhjav75aqugw4lg4vsw086iwocgx908cmm2w32lwb4a7ofigfnrp1nddwpburp2yayhfke0dajx7iiowugtuxrma38su1wuy5ktmxla3vtjyog3vplxqw9rfcvhilomsl0sqynpbs1miqmlz3ag4jk0wkjwfdy44is0csn2e9k2tflpwym8ww72tkcps0qpefmsppo8fms82lcj9m43az1z4cx3k4qh2ogcx9nf2lwh7q0qmu5fg72wctheyi6f4w86lpqltuukhu0rx4ner3lfof3933ic7s5w6bgs148vkyyf5pdnlrzcmzbsu876u4xuqojc655n9b7ps368nlccjvayu91nqwp0918ucgm2kscdc74enx7s95vjkg0bpnwkot1kn60o2ptxud72s5wcj89ryrh073g9e1ji59anhbbj92klhh7q875
6ewot8ookawb5yqbfwu6rgbcnks95hqhof68ic30o4t9lt8mypkw3ky0r41qi188f04vskrgjdg0sunusw9rcgmk75xu28fkejzie8pkhnfk0hs98lxm9ywcknnye1ba8c6eegq3qg3mx2ofh09i7nkyb8g4p3jcflg80zh6v8ajh80mpi8brnl3nb7h8qfj496p03flaaf355tl69oauncd44kdzwyhxpzu8ol91yjo2btvhgw2x8twx9vmz2315mod11njfrta78vjhlcnwbzu47bd6fmmmad5al0v62zgophle8th9kxkrlpzzkhzsfseaau4mte8538w0inyuk3s0277i84up5khi8d5la55qb8p8jmd37fee0qc4q865tcq29xhttg9nctv6c6w7bvdfl8lk3azcj5yw7jzxm7acdrz8sk0sqkjs43y5z3g72ef7lh5bmjospgiqyiq836u820r63gwuuus4peaclauaw6n5lcf5moe14e8cxhxx7qgfva1xwyqiu2krxyp9oqiy6q3at42i7 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:23.392 21:33:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.392 [2024-12-10 21:33:24.035847] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:23.392 [2024-12-10 21:33:24.036584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60093 ] 00:06:23.392 { 00:06:23.392 "subsystems": [ 00:06:23.392 { 00:06:23.392 "subsystem": "bdev", 00:06:23.392 "config": [ 00:06:23.392 { 00:06:23.392 "params": { 00:06:23.392 "trtype": "pcie", 00:06:23.392 "traddr": "0000:00:10.0", 00:06:23.392 "name": "Nvme0" 00:06:23.392 }, 00:06:23.392 "method": "bdev_nvme_attach_controller" 00:06:23.392 }, 00:06:23.392 { 00:06:23.392 "method": "bdev_wait_for_examine" 00:06:23.392 } 00:06:23.392 ] 00:06:23.392 } 00:06:23.392 ] 00:06:23.392 } 00:06:23.651 [2024-12-10 21:33:24.194187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.651 [2024-12-10 21:33:24.236290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.651 [2024-12-10 21:33:24.270333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.651  [2024-12-10T21:33:24.692Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:23.909 00:06:23.909 21:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:23.909 21:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:23.909 21:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:23.909 21:33:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:23.909 { 00:06:23.909 "subsystems": [ 00:06:23.909 { 00:06:23.909 "subsystem": "bdev", 00:06:23.909 "config": [ 00:06:23.909 { 00:06:23.909 "params": { 00:06:23.909 "trtype": "pcie", 00:06:23.909 "traddr": "0000:00:10.0", 00:06:23.909 "name": "Nvme0" 00:06:23.909 }, 00:06:23.909 "method": "bdev_nvme_attach_controller" 00:06:23.909 }, 00:06:23.909 { 00:06:23.909 "method": "bdev_wait_for_examine" 00:06:23.909 } 00:06:23.909 ] 00:06:23.909 } 00:06:23.909 ] 00:06:23.909 } 00:06:23.909 [2024-12-10 21:33:24.560496] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:23.909 [2024-12-10 21:33:24.560594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60112 ] 00:06:24.210 [2024-12-10 21:33:24.703592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.210 [2024-12-10 21:33:24.754672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.210 [2024-12-10 21:33:24.792491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.210  [2024-12-10T21:33:25.250Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:24.467 00:06:24.467 21:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:24.467 21:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ x95ro72f3tl8wwmfjf4hqoqfkbun5ad2ouke8ldl66a3hoesbl3jgy622tk8dt5bsaxh581os07baxubdqq9jnfqf9g9avpfq5ehzm72r8qgbdixyuigjwn2vixt65yad6ummz4bs69v3m758nz57uu3v7ouprtdok5hp2x1dulccyrueizkwcaid4l2c2rptlt1mg07yzhyokqa8wa9loooljs7nhoyfmrni43zhadgwgovgj9pypul29q2ic1txudw0ckg28ayf00q7v7j3snu4tv25g7pckph5lig4d1t3ddrekmwwsawudmbaao7ezrvns5gz2rswaerupsa2i6wv5hb9h3xlkhkpdwy1a1uo8a0zdanu3ofpkp64px57j98a2tkr2blu1e25gx6klqtn2wnutyagpwbw1xwtiqoko7ikxxuph65mufg1rj8ug6tbun95dld4at25csqhw39v9iubvtl5tbsmfyor5nmgfn36bhh2g4fc997vskiawy6f6oi9vxe4v6sfk4s3n3wmo8rguyry5vv09j5fc9m8ozs8ejwfvj7bfwjch05mgbkq6g8c6g1pki9l4f5w61pvkuatk3222vahpdjzccj1d9sumj6hznfci7zad8ew2ty99z50wi2optix7f6wgg289iv2jy5bvyore7xoogvs07mskziw9d905t8o93djuc4t7z7nuovym6moxck60pyceiorpf3h5yec887zsb9k6wzuo0x7h67qifl6z40oyoyu6k85h7qv7hza5uwg85stq6yrae7ketjvtfeo0unhrzb7p5pz52jpia242m6c6tca534bxhnl84pf2mybznjt9nhh1x7mwn96tzg2mzg643rmnila2lzgp52b9p7q989pckpjrswz5ml5u0jdaneahhom2q3u0vpop09hyl78s70nu19ibrmanadlqijsvluxjp6282yxgfd7j2upw70wkuas9095kgfdbiwslqnoh17ooxo46jzta9fbcm7pfyizhuvz9t22v7gz9nt87h35egtdrfbh3i2utue3keim12vcgwla6bkmiao43puazg6u2op03ojxq5utn4l3abk2yc1oso67p8aunvios5vrctp9krl6uduw8c9zgi49awjbq7j0da5rnoo1ew34eefykobbtl29u9uk421anip5yw4k264lug7tmbhdpq3tclbpisys9bezs9u5s7wgk3wqjh3kt8lf5fkzpg4r5cgg7avr39tbglf2ybu7jgwe7pns7bz00fb86v1nf9uqgvf1ydw2ql6swy9n3emar8x5t6pk1x8ixmxt5ztyoei113or9ykj3zgcmhve03jds0da9t0nyll9f7tkn2tjuy19es3y99xvr9bl5dqi246wzf7anw7jw429cpu2fbrsc8j4lyfiyqd9efh01lpf9kilgyxpn5g3hftfmpuraoanzjncardgth5e65p1edrkecop5uwmh7tczop2skls6ayb5etxm4acjrh43zgb4irj5weezja5bp2wx3j0fzvqirap8vs0ar3mw2zvt56xhl0p3le20bihj585o12tcvro66lvhq1gxl3wiwyg6rbtx906xe3gf8t7t9y03th91c9dmt5gw4a4x3rko6d0l56fwzytadddh7i3o0b6lop8tg3pygnhk1lgcxky5lwdivdb748n2u4c73asg3lsgej1bqa4ci2ydecwl2yqlo7qmc6w1gid4hlobo2n1xqhaqhwnysin7b2kdlc9zdta0zx3jzk2yokk5q78xepgkgzgya40l4z4jvkawshnvnns1gwpxu9jeel6kraop5344yzymxtavss7b85b6p7ljy6j1ki8iu2t7390j7zoxmcd3mjkxwoouqsjtdai5pt2eqh9tuco465oqe4otmaq4m3q0vqwfqhd9eyqe9emtyjnrxjrj5sss52m9bq7rddi4f72tg6k58rxcb33k2xmv10du5fymyz0dgnyjge3s2gljfgm430bqr4du7drqytsuyn1d25m4tdblbb17ygnlv1wrfoingz499yjz7k92lnfuo79upvhygrhe564nrnzxghemun0iumgwxl04t0a7b36kh7fbydu93vekd5lbi4ymrco1u5w9orakkrnoyaqgj2drca13nmyuho7vebg6d5vupf6uvyb5n1t5vdal03dpgi0gdbi2baalieg5mwtb0wf384he1tsb4cvm7qu6e8ted4i3es0n3mm3ze8bmbu5wxtdbw9xs7akkhbrnq6a12kh3dyfa2fptx2tde53dfxy0m49m2b8ubvxek8iev1dk2kxwtnk5vz8trab6vavef95yc5ctt2lgdudhe2g3xspy5qovopxcln6pmtr9uc1y9nksudh1khxu2vj1chekl3i2if983v95nm6dc60mngel0d7r4nftbmmm2byudipyc9vj9kd5ut7s5whghvds6a37ets2l3gzzcg4mme556a2zf06g59bka3kt3xrkxvrv00hqiux1lzmej8gf6dt2srhi6ewedjo73ciajn2gibabz4msm1bcz2jafp8nbwtfk2p
p75x66jd04h3gmgqkq5mybsw54eyiv4z9vx41b8ytyuaqhnupkmt2t1j0ihrp3vpdknnmrn62z61gfekxypdykjs3z8x15vcofdv42cq8p00ivzbwkjsk5gvaegmmohdgx4kkbp9eic58pih8f6nrzotvfgzt5wl1swe16fdhkzqvuj8p0u07tkaqxa7y4f8mu2u12n40oenc51fsj2q1vb9cbxq2olcik8z5h2djlurvo6kjd3v15tn7y3gfboq6ttk1beo5pbqr2a5vv7zerkslzarpitzy1vvh56cqui3yi9qt6mx2bprm8ceqwwzunyrdcy37hcoc20wjwcozeodl44cjrz50h3qdugwdn16l3hoblot4atvxp31lfz5gj8mcnyyiahpgt5eabajgxzkd5debwvry0wvz24w4g6aefidvf0rknppjdoyzsmrbhbtegso2saqd6j6n1adeov60bhe35tb1jfhftux59d8fytwrxg8xhjav75aqugw4lg4vsw086iwocgx908cmm2w32lwb4a7ofigfnrp1nddwpburp2yayhfke0dajx7iiowugtuxrma38su1wuy5ktmxla3vtjyog3vplxqw9rfcvhilomsl0sqynpbs1miqmlz3ag4jk0wkjwfdy44is0csn2e9k2tflpwym8ww72tkcps0qpefmsppo8fms82lcj9m43az1z4cx3k4qh2ogcx9nf2lwh7q0qmu5fg72wctheyi6f4w86lpqltuukhu0rx4ner3lfof3933ic7s5w6bgs148vkyyf5pdnlrzcmzbsu876u4xuqojc655n9b7ps368nlccjvayu91nqwp0918ucgm2kscdc74enx7s95vjkg0bpnwkot1kn60o2ptxud72s5wcj89ryrh073g9e1ji59anhbbj92klhh7q8756ewot8ookawb5yqbfwu6rgbcnks95hqhof68ic30o4t9lt8mypkw3ky0r41qi188f04vskrgjdg0sunusw9rcgmk75xu28fkejzie8pkhnfk0hs98lxm9ywcknnye1ba8c6eegq3qg3mx2ofh09i7nkyb8g4p3jcflg80zh6v8ajh80mpi8brnl3nb7h8qfj496p03flaaf355tl69oauncd44kdzwyhxpzu8ol91yjo2btvhgw2x8twx9vmz2315mod11njfrta78vjhlcnwbzu47bd6fmmmad5al0v62zgophle8th9kxkrlpzzkhzsfseaau4mte8538w0inyuk3s0277i84up5khi8d5la55qb8p8jmd37fee0qc4q865tcq29xhttg9nctv6c6w7bvdfl8lk3azcj5yw7jzxm7acdrz8sk0sqkjs43y5z3g72ef7lh5bmjospgiqyiq836u820r63gwuuus4peaclauaw6n5lcf5moe14e8cxhxx7qgfva1xwyqiu2krxyp9oqiy6q3at42i7 == \x\9\5\r\o\7\2\f\3\t\l\8\w\w\m\f\j\f\4\h\q\o\q\f\k\b\u\n\5\a\d\2\o\u\k\e\8\l\d\l\6\6\a\3\h\o\e\s\b\l\3\j\g\y\6\2\2\t\k\8\d\t\5\b\s\a\x\h\5\8\1\o\s\0\7\b\a\x\u\b\d\q\q\9\j\n\f\q\f\9\g\9\a\v\p\f\q\5\e\h\z\m\7\2\r\8\q\g\b\d\i\x\y\u\i\g\j\w\n\2\v\i\x\t\6\5\y\a\d\6\u\m\m\z\4\b\s\6\9\v\3\m\7\5\8\n\z\5\7\u\u\3\v\7\o\u\p\r\t\d\o\k\5\h\p\2\x\1\d\u\l\c\c\y\r\u\e\i\z\k\w\c\a\i\d\4\l\2\c\2\r\p\t\l\t\1\m\g\0\7\y\z\h\y\o\k\q\a\8\w\a\9\l\o\o\o\l\j\s\7\n\h\o\y\f\m\r\n\i\4\3\z\h\a\d\g\w\g\o\v\g\j\9\p\y\p\u\l\2\9\q\2\i\c\1\t\x\u\d\w\0\c\k\g\2\8\a\y\f\0\0\q\7\v\7\j\3\s\n\u\4\t\v\2\5\g\7\p\c\k\p\h\5\l\i\g\4\d\1\t\3\d\d\r\e\k\m\w\w\s\a\w\u\d\m\b\a\a\o\7\e\z\r\v\n\s\5\g\z\2\r\s\w\a\e\r\u\p\s\a\2\i\6\w\v\5\h\b\9\h\3\x\l\k\h\k\p\d\w\y\1\a\1\u\o\8\a\0\z\d\a\n\u\3\o\f\p\k\p\6\4\p\x\5\7\j\9\8\a\2\t\k\r\2\b\l\u\1\e\2\5\g\x\6\k\l\q\t\n\2\w\n\u\t\y\a\g\p\w\b\w\1\x\w\t\i\q\o\k\o\7\i\k\x\x\u\p\h\6\5\m\u\f\g\1\r\j\8\u\g\6\t\b\u\n\9\5\d\l\d\4\a\t\2\5\c\s\q\h\w\3\9\v\9\i\u\b\v\t\l\5\t\b\s\m\f\y\o\r\5\n\m\g\f\n\3\6\b\h\h\2\g\4\f\c\9\9\7\v\s\k\i\a\w\y\6\f\6\o\i\9\v\x\e\4\v\6\s\f\k\4\s\3\n\3\w\m\o\8\r\g\u\y\r\y\5\v\v\0\9\j\5\f\c\9\m\8\o\z\s\8\e\j\w\f\v\j\7\b\f\w\j\c\h\0\5\m\g\b\k\q\6\g\8\c\6\g\1\p\k\i\9\l\4\f\5\w\6\1\p\v\k\u\a\t\k\3\2\2\2\v\a\h\p\d\j\z\c\c\j\1\d\9\s\u\m\j\6\h\z\n\f\c\i\7\z\a\d\8\e\w\2\t\y\9\9\z\5\0\w\i\2\o\p\t\i\x\7\f\6\w\g\g\2\8\9\i\v\2\j\y\5\b\v\y\o\r\e\7\x\o\o\g\v\s\0\7\m\s\k\z\i\w\9\d\9\0\5\t\8\o\9\3\d\j\u\c\4\t\7\z\7\n\u\o\v\y\m\6\m\o\x\c\k\6\0\p\y\c\e\i\o\r\p\f\3\h\5\y\e\c\8\8\7\z\s\b\9\k\6\w\z\u\o\0\x\7\h\6\7\q\i\f\l\6\z\4\0\o\y\o\y\u\6\k\8\5\h\7\q\v\7\h\z\a\5\u\w\g\8\5\s\t\q\6\y\r\a\e\7\k\e\t\j\v\t\f\e\o\0\u\n\h\r\z\b\7\p\5\p\z\5\2\j\p\i\a\2\4\2\m\6\c\6\t\c\a\5\3\4\b\x\h\n\l\8\4\p\f\2\m\y\b\z\n\j\t\9\n\h\h\1\x\7\m\w\n\9\6\t\z\g\2\m\z\g\6\4\3\r\m\n\i\l\a\2\l\z\g\p\5\2\b\9\p\7\q\9\8\9\p\c\k\p\j\r\s\w\z\5\m\l\5\u\0\j\d\a\n\e\a\h\h\o\m\2\q\3\u\0\v\p\o\p\0\9\h\y\l\7\8\s\7\0\n\u\1\9\i\b\r\m\a\n\a\d\l\q\i\j\s\v\l\u\x\j\p\6\2\8\2\y\x\g\f\d\7\j\2\u\p\w\7\0\w\k\u\a\s\9\0\9\5\k\g\f\d\b\i\w\s\l\q\n\o\h\1\7\o\o\x\o\4\6\j\z\t\a\9\f\b\c\m\7\p\f\y\i\z\h\u\
v\z\9\t\2\2\v\7\g\z\9\n\t\8\7\h\3\5\e\g\t\d\r\f\b\h\3\i\2\u\t\u\e\3\k\e\i\m\1\2\v\c\g\w\l\a\6\b\k\m\i\a\o\4\3\p\u\a\z\g\6\u\2\o\p\0\3\o\j\x\q\5\u\t\n\4\l\3\a\b\k\2\y\c\1\o\s\o\6\7\p\8\a\u\n\v\i\o\s\5\v\r\c\t\p\9\k\r\l\6\u\d\u\w\8\c\9\z\g\i\4\9\a\w\j\b\q\7\j\0\d\a\5\r\n\o\o\1\e\w\3\4\e\e\f\y\k\o\b\b\t\l\2\9\u\9\u\k\4\2\1\a\n\i\p\5\y\w\4\k\2\6\4\l\u\g\7\t\m\b\h\d\p\q\3\t\c\l\b\p\i\s\y\s\9\b\e\z\s\9\u\5\s\7\w\g\k\3\w\q\j\h\3\k\t\8\l\f\5\f\k\z\p\g\4\r\5\c\g\g\7\a\v\r\3\9\t\b\g\l\f\2\y\b\u\7\j\g\w\e\7\p\n\s\7\b\z\0\0\f\b\8\6\v\1\n\f\9\u\q\g\v\f\1\y\d\w\2\q\l\6\s\w\y\9\n\3\e\m\a\r\8\x\5\t\6\p\k\1\x\8\i\x\m\x\t\5\z\t\y\o\e\i\1\1\3\o\r\9\y\k\j\3\z\g\c\m\h\v\e\0\3\j\d\s\0\d\a\9\t\0\n\y\l\l\9\f\7\t\k\n\2\t\j\u\y\1\9\e\s\3\y\9\9\x\v\r\9\b\l\5\d\q\i\2\4\6\w\z\f\7\a\n\w\7\j\w\4\2\9\c\p\u\2\f\b\r\s\c\8\j\4\l\y\f\i\y\q\d\9\e\f\h\0\1\l\p\f\9\k\i\l\g\y\x\p\n\5\g\3\h\f\t\f\m\p\u\r\a\o\a\n\z\j\n\c\a\r\d\g\t\h\5\e\6\5\p\1\e\d\r\k\e\c\o\p\5\u\w\m\h\7\t\c\z\o\p\2\s\k\l\s\6\a\y\b\5\e\t\x\m\4\a\c\j\r\h\4\3\z\g\b\4\i\r\j\5\w\e\e\z\j\a\5\b\p\2\w\x\3\j\0\f\z\v\q\i\r\a\p\8\v\s\0\a\r\3\m\w\2\z\v\t\5\6\x\h\l\0\p\3\l\e\2\0\b\i\h\j\5\8\5\o\1\2\t\c\v\r\o\6\6\l\v\h\q\1\g\x\l\3\w\i\w\y\g\6\r\b\t\x\9\0\6\x\e\3\g\f\8\t\7\t\9\y\0\3\t\h\9\1\c\9\d\m\t\5\g\w\4\a\4\x\3\r\k\o\6\d\0\l\5\6\f\w\z\y\t\a\d\d\d\h\7\i\3\o\0\b\6\l\o\p\8\t\g\3\p\y\g\n\h\k\1\l\g\c\x\k\y\5\l\w\d\i\v\d\b\7\4\8\n\2\u\4\c\7\3\a\s\g\3\l\s\g\e\j\1\b\q\a\4\c\i\2\y\d\e\c\w\l\2\y\q\l\o\7\q\m\c\6\w\1\g\i\d\4\h\l\o\b\o\2\n\1\x\q\h\a\q\h\w\n\y\s\i\n\7\b\2\k\d\l\c\9\z\d\t\a\0\z\x\3\j\z\k\2\y\o\k\k\5\q\7\8\x\e\p\g\k\g\z\g\y\a\4\0\l\4\z\4\j\v\k\a\w\s\h\n\v\n\n\s\1\g\w\p\x\u\9\j\e\e\l\6\k\r\a\o\p\5\3\4\4\y\z\y\m\x\t\a\v\s\s\7\b\8\5\b\6\p\7\l\j\y\6\j\1\k\i\8\i\u\2\t\7\3\9\0\j\7\z\o\x\m\c\d\3\m\j\k\x\w\o\o\u\q\s\j\t\d\a\i\5\p\t\2\e\q\h\9\t\u\c\o\4\6\5\o\q\e\4\o\t\m\a\q\4\m\3\q\0\v\q\w\f\q\h\d\9\e\y\q\e\9\e\m\t\y\j\n\r\x\j\r\j\5\s\s\s\5\2\m\9\b\q\7\r\d\d\i\4\f\7\2\t\g\6\k\5\8\r\x\c\b\3\3\k\2\x\m\v\1\0\d\u\5\f\y\m\y\z\0\d\g\n\y\j\g\e\3\s\2\g\l\j\f\g\m\4\3\0\b\q\r\4\d\u\7\d\r\q\y\t\s\u\y\n\1\d\2\5\m\4\t\d\b\l\b\b\1\7\y\g\n\l\v\1\w\r\f\o\i\n\g\z\4\9\9\y\j\z\7\k\9\2\l\n\f\u\o\7\9\u\p\v\h\y\g\r\h\e\5\6\4\n\r\n\z\x\g\h\e\m\u\n\0\i\u\m\g\w\x\l\0\4\t\0\a\7\b\3\6\k\h\7\f\b\y\d\u\9\3\v\e\k\d\5\l\b\i\4\y\m\r\c\o\1\u\5\w\9\o\r\a\k\k\r\n\o\y\a\q\g\j\2\d\r\c\a\1\3\n\m\y\u\h\o\7\v\e\b\g\6\d\5\v\u\p\f\6\u\v\y\b\5\n\1\t\5\v\d\a\l\0\3\d\p\g\i\0\g\d\b\i\2\b\a\a\l\i\e\g\5\m\w\t\b\0\w\f\3\8\4\h\e\1\t\s\b\4\c\v\m\7\q\u\6\e\8\t\e\d\4\i\3\e\s\0\n\3\m\m\3\z\e\8\b\m\b\u\5\w\x\t\d\b\w\9\x\s\7\a\k\k\h\b\r\n\q\6\a\1\2\k\h\3\d\y\f\a\2\f\p\t\x\2\t\d\e\5\3\d\f\x\y\0\m\4\9\m\2\b\8\u\b\v\x\e\k\8\i\e\v\1\d\k\2\k\x\w\t\n\k\5\v\z\8\t\r\a\b\6\v\a\v\e\f\9\5\y\c\5\c\t\t\2\l\g\d\u\d\h\e\2\g\3\x\s\p\y\5\q\o\v\o\p\x\c\l\n\6\p\m\t\r\9\u\c\1\y\9\n\k\s\u\d\h\1\k\h\x\u\2\v\j\1\c\h\e\k\l\3\i\2\i\f\9\8\3\v\9\5\n\m\6\d\c\6\0\m\n\g\e\l\0\d\7\r\4\n\f\t\b\m\m\m\2\b\y\u\d\i\p\y\c\9\v\j\9\k\d\5\u\t\7\s\5\w\h\g\h\v\d\s\6\a\3\7\e\t\s\2\l\3\g\z\z\c\g\4\m\m\e\5\5\6\a\2\z\f\0\6\g\5\9\b\k\a\3\k\t\3\x\r\k\x\v\r\v\0\0\h\q\i\u\x\1\l\z\m\e\j\8\g\f\6\d\t\2\s\r\h\i\6\e\w\e\d\j\o\7\3\c\i\a\j\n\2\g\i\b\a\b\z\4\m\s\m\1\b\c\z\2\j\a\f\p\8\n\b\w\t\f\k\2\p\p\7\5\x\6\6\j\d\0\4\h\3\g\m\g\q\k\q\5\m\y\b\s\w\5\4\e\y\i\v\4\z\9\v\x\4\1\b\8\y\t\y\u\a\q\h\n\u\p\k\m\t\2\t\1\j\0\i\h\r\p\3\v\p\d\k\n\n\m\r\n\6\2\z\6\1\g\f\e\k\x\y\p\d\y\k\j\s\3\z\8\x\1\5\v\c\o\f\d\v\4\2\c\q\8\p\0\0\i\v\z\b\w\k\j\s\k\5\g\v\a\e\g\m\m\o\h\d\g\x\4\k\k\b\p\9\e\i\c\5\8\p\i\h\8\f\6\n\r\z\o\t\v\f\g\z\t\5\w\l\1\s\w\e\1\6\f\d\h\k\z\q\v\u\j\8\p\0\u\0\7\t\k\a\q\x\a\7\y\4\f\8\m\u\2\u\1\2\n\4\0
\o\e\n\c\5\1\f\s\j\2\q\1\v\b\9\c\b\x\q\2\o\l\c\i\k\8\z\5\h\2\d\j\l\u\r\v\o\6\k\j\d\3\v\1\5\t\n\7\y\3\g\f\b\o\q\6\t\t\k\1\b\e\o\5\p\b\q\r\2\a\5\v\v\7\z\e\r\k\s\l\z\a\r\p\i\t\z\y\1\v\v\h\5\6\c\q\u\i\3\y\i\9\q\t\6\m\x\2\b\p\r\m\8\c\e\q\w\w\z\u\n\y\r\d\c\y\3\7\h\c\o\c\2\0\w\j\w\c\o\z\e\o\d\l\4\4\c\j\r\z\5\0\h\3\q\d\u\g\w\d\n\1\6\l\3\h\o\b\l\o\t\4\a\t\v\x\p\3\1\l\f\z\5\g\j\8\m\c\n\y\y\i\a\h\p\g\t\5\e\a\b\a\j\g\x\z\k\d\5\d\e\b\w\v\r\y\0\w\v\z\2\4\w\4\g\6\a\e\f\i\d\v\f\0\r\k\n\p\p\j\d\o\y\z\s\m\r\b\h\b\t\e\g\s\o\2\s\a\q\d\6\j\6\n\1\a\d\e\o\v\6\0\b\h\e\3\5\t\b\1\j\f\h\f\t\u\x\5\9\d\8\f\y\t\w\r\x\g\8\x\h\j\a\v\7\5\a\q\u\g\w\4\l\g\4\v\s\w\0\8\6\i\w\o\c\g\x\9\0\8\c\m\m\2\w\3\2\l\w\b\4\a\7\o\f\i\g\f\n\r\p\1\n\d\d\w\p\b\u\r\p\2\y\a\y\h\f\k\e\0\d\a\j\x\7\i\i\o\w\u\g\t\u\x\r\m\a\3\8\s\u\1\w\u\y\5\k\t\m\x\l\a\3\v\t\j\y\o\g\3\v\p\l\x\q\w\9\r\f\c\v\h\i\l\o\m\s\l\0\s\q\y\n\p\b\s\1\m\i\q\m\l\z\3\a\g\4\j\k\0\w\k\j\w\f\d\y\4\4\i\s\0\c\s\n\2\e\9\k\2\t\f\l\p\w\y\m\8\w\w\7\2\t\k\c\p\s\0\q\p\e\f\m\s\p\p\o\8\f\m\s\8\2\l\c\j\9\m\4\3\a\z\1\z\4\c\x\3\k\4\q\h\2\o\g\c\x\9\n\f\2\l\w\h\7\q\0\q\m\u\5\f\g\7\2\w\c\t\h\e\y\i\6\f\4\w\8\6\l\p\q\l\t\u\u\k\h\u\0\r\x\4\n\e\r\3\l\f\o\f\3\9\3\3\i\c\7\s\5\w\6\b\g\s\1\4\8\v\k\y\y\f\5\p\d\n\l\r\z\c\m\z\b\s\u\8\7\6\u\4\x\u\q\o\j\c\6\5\5\n\9\b\7\p\s\3\6\8\n\l\c\c\j\v\a\y\u\9\1\n\q\w\p\0\9\1\8\u\c\g\m\2\k\s\c\d\c\7\4\e\n\x\7\s\9\5\v\j\k\g\0\b\p\n\w\k\o\t\1\k\n\6\0\o\2\p\t\x\u\d\7\2\s\5\w\c\j\8\9\r\y\r\h\0\7\3\g\9\e\1\j\i\5\9\a\n\h\b\b\j\9\2\k\l\h\h\7\q\8\7\5\6\e\w\o\t\8\o\o\k\a\w\b\5\y\q\b\f\w\u\6\r\g\b\c\n\k\s\9\5\h\q\h\o\f\6\8\i\c\3\0\o\4\t\9\l\t\8\m\y\p\k\w\3\k\y\0\r\4\1\q\i\1\8\8\f\0\4\v\s\k\r\g\j\d\g\0\s\u\n\u\s\w\9\r\c\g\m\k\7\5\x\u\2\8\f\k\e\j\z\i\e\8\p\k\h\n\f\k\0\h\s\9\8\l\x\m\9\y\w\c\k\n\n\y\e\1\b\a\8\c\6\e\e\g\q\3\q\g\3\m\x\2\o\f\h\0\9\i\7\n\k\y\b\8\g\4\p\3\j\c\f\l\g\8\0\z\h\6\v\8\a\j\h\8\0\m\p\i\8\b\r\n\l\3\n\b\7\h\8\q\f\j\4\9\6\p\0\3\f\l\a\a\f\3\5\5\t\l\6\9\o\a\u\n\c\d\4\4\k\d\z\w\y\h\x\p\z\u\8\o\l\9\1\y\j\o\2\b\t\v\h\g\w\2\x\8\t\w\x\9\v\m\z\2\3\1\5\m\o\d\1\1\n\j\f\r\t\a\7\8\v\j\h\l\c\n\w\b\z\u\4\7\b\d\6\f\m\m\m\a\d\5\a\l\0\v\6\2\z\g\o\p\h\l\e\8\t\h\9\k\x\k\r\l\p\z\z\k\h\z\s\f\s\e\a\a\u\4\m\t\e\8\5\3\8\w\0\i\n\y\u\k\3\s\0\2\7\7\i\8\4\u\p\5\k\h\i\8\d\5\l\a\5\5\q\b\8\p\8\j\m\d\3\7\f\e\e\0\q\c\4\q\8\6\5\t\c\q\2\9\x\h\t\t\g\9\n\c\t\v\6\c\6\w\7\b\v\d\f\l\8\l\k\3\a\z\c\j\5\y\w\7\j\z\x\m\7\a\c\d\r\z\8\s\k\0\s\q\k\j\s\4\3\y\5\z\3\g\7\2\e\f\7\l\h\5\b\m\j\o\s\p\g\i\q\y\i\q\8\3\6\u\8\2\0\r\6\3\g\w\u\u\u\s\4\p\e\a\c\l\a\u\a\w\6\n\5\l\c\f\5\m\o\e\1\4\e\8\c\x\h\x\x\7\q\g\f\v\a\1\x\w\y\q\i\u\2\k\r\x\y\p\9\o\q\i\y\6\q\3\a\t\4\2\i\7 ]] 00:06:24.467 00:06:24.468 real 0m1.108s 00:06:24.468 user 0m0.774s 00:06:24.468 sys 0m0.421s 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:24.468 ************************************ 00:06:24.468 END TEST dd_rw_offset 00:06:24.468 ************************************ 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.468 21:33:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.468 { 00:06:24.468 "subsystems": [ 00:06:24.468 { 00:06:24.468 "subsystem": "bdev", 00:06:24.468 "config": [ 00:06:24.468 { 00:06:24.468 "params": { 00:06:24.468 "trtype": "pcie", 00:06:24.468 "traddr": "0000:00:10.0", 00:06:24.468 "name": "Nvme0" 00:06:24.468 }, 00:06:24.468 "method": "bdev_nvme_attach_controller" 00:06:24.468 }, 00:06:24.468 { 00:06:24.468 "method": "bdev_wait_for_examine" 00:06:24.468 } 00:06:24.468 ] 00:06:24.468 } 00:06:24.468 ] 00:06:24.468 } 00:06:24.468 [2024-12-10 21:33:25.139284] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:24.468 [2024-12-10 21:33:25.139617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:06:24.723 [2024-12-10 21:33:25.287430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.723 [2024-12-10 21:33:25.328491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.723 [2024-12-10 21:33:25.363717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.723  [2024-12-10T21:33:25.767Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:24.985 00:06:24.985 21:33:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.985 ************************************ 00:06:24.985 END TEST spdk_dd_basic_rw 00:06:24.985 ************************************ 00:06:24.985 00:06:24.985 real 0m16.032s 00:06:24.985 user 0m11.839s 00:06:24.985 sys 0m4.909s 00:06:24.985 21:33:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.985 21:33:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.985 21:33:25 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:24.985 21:33:25 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.985 21:33:25 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.985 21:33:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:24.985 ************************************ 00:06:24.985 START TEST spdk_dd_posix 00:06:24.985 ************************************ 00:06:24.985 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:24.985 * Looking for test storage... 
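Editor's note on the clear_nvme step traced above: it zero-fills one 1 MiB block of the Nvme0n1 bdev by pointing spdk_dd at /dev/zero for input, at the bdev via --ob, and handing it the bdev configuration as JSON on /dev/fd/62 (gen_conf prints the config shown in the trace). A minimal standalone sketch of the same call, assuming a built spdk_dd binary and an NVMe controller at PCI address 0000:00:10.0 as in this run, could look like the following (a temporary config file stands in for the /dev/fd path):

# Editor's sketch, not log output: reproduce the clear_nvme invocation above.
# Assumes ./build/bin/spdk_dd exists and an NVMe controller sits at 0000:00:10.0.
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Write a single 1 MiB block of zeroes to the Nvme0n1 bdev, as the cleanup step does.
./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/nvme0_bdev.json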
00:06:24.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:24.985 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.985 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.985 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.244 --rc genhtml_branch_coverage=1 00:06:25.244 --rc genhtml_function_coverage=1 00:06:25.244 --rc genhtml_legend=1 00:06:25.244 --rc geninfo_all_blocks=1 00:06:25.244 --rc geninfo_unexecuted_blocks=1 00:06:25.244 00:06:25.244 ' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.244 --rc genhtml_branch_coverage=1 00:06:25.244 --rc genhtml_function_coverage=1 00:06:25.244 --rc genhtml_legend=1 00:06:25.244 --rc geninfo_all_blocks=1 00:06:25.244 --rc geninfo_unexecuted_blocks=1 00:06:25.244 00:06:25.244 ' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.244 --rc genhtml_branch_coverage=1 00:06:25.244 --rc genhtml_function_coverage=1 00:06:25.244 --rc genhtml_legend=1 00:06:25.244 --rc geninfo_all_blocks=1 00:06:25.244 --rc geninfo_unexecuted_blocks=1 00:06:25.244 00:06:25.244 ' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.244 --rc genhtml_branch_coverage=1 00:06:25.244 --rc genhtml_function_coverage=1 00:06:25.244 --rc genhtml_legend=1 00:06:25.244 --rc geninfo_all_blocks=1 00:06:25.244 --rc geninfo_unexecuted_blocks=1 00:06:25.244 00:06:25.244 ' 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.244 21:33:25 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:25.245 * First test run, liburing in use 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.245 ************************************ 00:06:25.245 START TEST dd_flag_append 00:06:25.245 ************************************ 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=921mxgm57ha5qvc4t1970bxhtknmh1lq 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=s5n9mlokf9emvj4r1unibwh5trtjtl2t 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 921mxgm57ha5qvc4t1970bxhtknmh1lq 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s s5n9mlokf9emvj4r1unibwh5trtjtl2t 00:06:25.245 21:33:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:25.245 [2024-12-10 21:33:25.934727] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:25.245 [2024-12-10 21:33:25.934860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60208 ] 00:06:25.503 [2024-12-10 21:33:26.087208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.503 [2024-12-10 21:33:26.128759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.503 [2024-12-10 21:33:26.165428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.503  [2024-12-10T21:33:26.545Z] Copying: 32/32 [B] (average 31 kBps) 00:06:25.762 00:06:25.762 ************************************ 00:06:25.762 END TEST dd_flag_append 00:06:25.762 ************************************ 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ s5n9mlokf9emvj4r1unibwh5trtjtl2t921mxgm57ha5qvc4t1970bxhtknmh1lq == \s\5\n\9\m\l\o\k\f\9\e\m\v\j\4\r\1\u\n\i\b\w\h\5\t\r\t\j\t\l\2\t\9\2\1\m\x\g\m\5\7\h\a\5\q\v\c\4\t\1\9\7\0\b\x\h\t\k\n\m\h\1\l\q ]] 00:06:25.762 00:06:25.762 real 0m0.473s 00:06:25.762 user 0m0.250s 00:06:25.762 sys 0m0.206s 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.762 ************************************ 00:06:25.762 START TEST dd_flag_directory 00:06:25.762 ************************************ 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.762 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.762 [2024-12-10 21:33:26.445917] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:25.762 [2024-12-10 21:33:26.446035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:06:26.020 [2024-12-10 21:33:26.596866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.020 [2024-12-10 21:33:26.643965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.020 [2024-12-10 21:33:26.677878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.020 [2024-12-10 21:33:26.698174] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.020 [2024-12-10 21:33:26.698236] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.020 [2024-12-10 21:33:26.698251] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.020 [2024-12-10 21:33:26.770087] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.279 21:33:26 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.279 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.280 21:33:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:26.280 [2024-12-10 21:33:26.878050] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:26.280 [2024-12-10 21:33:26.878144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:06:26.280 [2024-12-10 21:33:27.029581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.538 [2024-12-10 21:33:27.071494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.538 [2024-12-10 21:33:27.103550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.538 [2024-12-10 21:33:27.125480] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.538 [2024-12-10 21:33:27.125552] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:26.538 [2024-12-10 21:33:27.125571] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.538 [2024-12-10 21:33:27.201146] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:26.538 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:26.538 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.538 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:26.538 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.539 00:06:26.539 real 0m0.891s 00:06:26.539 user 0m0.485s 00:06:26.539 sys 0m0.194s 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:26.539 ************************************ 00:06:26.539 END TEST dd_flag_directory 00:06:26.539 ************************************ 00:06:26.539 21:33:27 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.539 ************************************ 00:06:26.539 START TEST dd_flag_nofollow 00:06:26.539 ************************************ 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.539 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.798 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.798 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.798 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.798 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.798 [2024-12-10 21:33:27.386890] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:26.798 [2024-12-10 21:33:27.387030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:06:26.798 [2024-12-10 21:33:27.537433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.056 [2024-12-10 21:33:27.586493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.056 [2024-12-10 21:33:27.622392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.056 [2024-12-10 21:33:27.647896] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.056 [2024-12-10 21:33:27.647983] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:27.056 [2024-12-10 21:33:27.648006] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.056 [2024-12-10 21:33:27.725303] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.056 21:33:27 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.056 21:33:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:27.313 [2024-12-10 21:33:27.853958] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:27.313 [2024-12-10 21:33:27.854057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60286 ] 00:06:27.313 [2024-12-10 21:33:27.996428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.313 [2024-12-10 21:33:28.044779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.313 [2024-12-10 21:33:28.080472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.573 [2024-12-10 21:33:28.104946] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:27.573 [2024-12-10 21:33:28.105024] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:27.573 [2024-12-10 21:33:28.105048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.573 [2024-12-10 21:33:28.184633] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:27.573 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.573 [2024-12-10 21:33:28.318485] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:27.573 [2024-12-10 21:33:28.318615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60294 ] 00:06:27.831 [2024-12-10 21:33:28.467894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.831 [2024-12-10 21:33:28.516573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.831 [2024-12-10 21:33:28.547517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.831  [2024-12-10T21:33:28.873Z] Copying: 512/512 [B] (average 500 kBps) 00:06:28.090 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 5o5vxj9ta6gb1t4zzfuwai2sztlv3zmx8ss6qi96ahvq99saxfc6pyrkermlz0oc58xe4914ckrhndmq5jc984wc845f8lx7z0qtxg5ogjqefjtadbgsowvdk3bktwydpbo1apvtcjntp6ego3xr0exisas1kivvl7171zpy9iv54jjgai4u9tqefbis5p3mqwccoot210r0j3ssnaj5xz6mcscod7povkjroyhbc2nukftawoje6bdgb2on2t3fno02o3owl49ge5c4txttdzjqn8blraigxtjz56fyf1u9hqntmg9837arkn5epip2tdluch5z194myaw9i6ipuokzyudcggkhpix547pf3jft8n96pxjzcmrz4f47qvm69814oc734ps1x1h2yp80sisphdbhyddcbx52husk69lwj7z6u6d7e4iu5an282jmd70yum7cu7j91gzbxhnz0difcgqx7g0rg8y4us2bflsomd41hjev6632of8l5ixm == \5\o\5\v\x\j\9\t\a\6\g\b\1\t\4\z\z\f\u\w\a\i\2\s\z\t\l\v\3\z\m\x\8\s\s\6\q\i\9\6\a\h\v\q\9\9\s\a\x\f\c\6\p\y\r\k\e\r\m\l\z\0\o\c\5\8\x\e\4\9\1\4\c\k\r\h\n\d\m\q\5\j\c\9\8\4\w\c\8\4\5\f\8\l\x\7\z\0\q\t\x\g\5\o\g\j\q\e\f\j\t\a\d\b\g\s\o\w\v\d\k\3\b\k\t\w\y\d\p\b\o\1\a\p\v\t\c\j\n\t\p\6\e\g\o\3\x\r\0\e\x\i\s\a\s\1\k\i\v\v\l\7\1\7\1\z\p\y\9\i\v\5\4\j\j\g\a\i\4\u\9\t\q\e\f\b\i\s\5\p\3\m\q\w\c\c\o\o\t\2\1\0\r\0\j\3\s\s\n\a\j\5\x\z\6\m\c\s\c\o\d\7\p\o\v\k\j\r\o\y\h\b\c\2\n\u\k\f\t\a\w\o\j\e\6\b\d\g\b\2\o\n\2\t\3\f\n\o\0\2\o\3\o\w\l\4\9\g\e\5\c\4\t\x\t\t\d\z\j\q\n\8\b\l\r\a\i\g\x\t\j\z\5\6\f\y\f\1\u\9\h\q\n\t\m\g\9\8\3\7\a\r\k\n\5\e\p\i\p\2\t\d\l\u\c\h\5\z\1\9\4\m\y\a\w\9\i\6\i\p\u\o\k\z\y\u\d\c\g\g\k\h\p\i\x\5\4\7\p\f\3\j\f\t\8\n\9\6\p\x\j\z\c\m\r\z\4\f\4\7\q\v\m\6\9\8\1\4\o\c\7\3\4\p\s\1\x\1\h\2\y\p\8\0\s\i\s\p\h\d\b\h\y\d\d\c\b\x\5\2\h\u\s\k\6\9\l\w\j\7\z\6\u\6\d\7\e\4\i\u\5\a\n\2\8\2\j\m\d\7\0\y\u\m\7\c\u\7\j\9\1\g\z\b\x\h\n\z\0\d\i\f\c\g\q\x\7\g\0\r\g\8\y\4\u\s\2\b\f\l\s\o\m\d\4\1\h\j\e\v\6\6\3\2\o\f\8\l\5\i\x\m ]] 00:06:28.090 00:06:28.090 real 0m1.391s 00:06:28.090 user 0m0.749s 00:06:28.090 sys 0m0.403s 00:06:28.090 ************************************ 00:06:28.090 END TEST dd_flag_nofollow 00:06:28.090 ************************************ 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.090 ************************************ 00:06:28.090 START TEST dd_flag_noatime 00:06:28.090 ************************************ 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733866408 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733866408 00:06:28.090 21:33:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:29.073 21:33:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.073 [2024-12-10 21:33:29.819154] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:29.073 [2024-12-10 21:33:29.819291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:06:29.331 [2024-12-10 21:33:29.970486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.331 [2024-12-10 21:33:30.006624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.331 [2024-12-10 21:33:30.038316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.331  [2024-12-10T21:33:30.372Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.589 00:06:29.589 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.589 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733866408 )) 00:06:29.589 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.589 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733866408 )) 00:06:29.589 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.589 [2024-12-10 21:33:30.241512] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:29.589 [2024-12-10 21:33:30.241601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60344 ] 00:06:29.847 [2024-12-10 21:33:30.383562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.847 [2024-12-10 21:33:30.418007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.847 [2024-12-10 21:33:30.449019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.847  [2024-12-10T21:33:30.630Z] Copying: 512/512 [B] (average 500 kBps) 00:06:29.847 00:06:29.847 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.847 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733866410 )) 00:06:29.847 00:06:29.847 real 0m1.863s 00:06:29.847 user 0m0.447s 00:06:29.847 sys 0m0.362s 00:06:29.847 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.847 21:33:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:29.847 ************************************ 00:06:29.847 END TEST dd_flag_noatime 00:06:29.847 ************************************ 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:30.105 ************************************ 00:06:30.105 START TEST dd_flags_misc 00:06:30.105 ************************************ 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.105 21:33:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.105 [2024-12-10 21:33:30.719677] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:30.105 [2024-12-10 21:33:30.719814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 00:06:30.105 [2024-12-10 21:33:30.872650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.363 [2024-12-10 21:33:30.913477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.363 [2024-12-10 21:33:30.947952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.363  [2024-12-10T21:33:31.146Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.363 00:06:30.363 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 38glsaepkjpbx0vkwhiw1iwpgtwp7b3r6ztoftza96ptdqj5q26eb33ph711o6xposkigjrinyh2hoaed9nx3vs9d9ntt7zcvky9a6ltstrkxws350ajhcu6j4uqv5p6stdqqtiffi2e6qubalp81n8iwu1330i2jeromq29bv7etf9ow8es4azcjarh34nyb1mzs1q4my7378spulm4n2fwb09uxrq1ypfvxnceldf0eycr0hyeykozvaldq7i9giu1cmro6wk7vgm2o16nxs7ef6dq1x3er6ggi3pov9meuwd9j7ughplwkujuio81vtwlujgwf6htselvsq1ik3slm5c2gu98vh0j4olem9mi3z0w80w384csn4tt1fmkxannp8t1f23u0lv4g471hb9cmom9nq8fj8ydob1e7i0yyh3yj71bkvwi8zjf1zusy07kuqky3tkwdd9t809ftgj3m08xq5jkd386k7bwspdcokcis5sd20k1zs4d9690 == \3\8\g\l\s\a\e\p\k\j\p\b\x\0\v\k\w\h\i\w\1\i\w\p\g\t\w\p\7\b\3\r\6\z\t\o\f\t\z\a\9\6\p\t\d\q\j\5\q\2\6\e\b\3\3\p\h\7\1\1\o\6\x\p\o\s\k\i\g\j\r\i\n\y\h\2\h\o\a\e\d\9\n\x\3\v\s\9\d\9\n\t\t\7\z\c\v\k\y\9\a\6\l\t\s\t\r\k\x\w\s\3\5\0\a\j\h\c\u\6\j\4\u\q\v\5\p\6\s\t\d\q\q\t\i\f\f\i\2\e\6\q\u\b\a\l\p\8\1\n\8\i\w\u\1\3\3\0\i\2\j\e\r\o\m\q\2\9\b\v\7\e\t\f\9\o\w\8\e\s\4\a\z\c\j\a\r\h\3\4\n\y\b\1\m\z\s\1\q\4\m\y\7\3\7\8\s\p\u\l\m\4\n\2\f\w\b\0\9\u\x\r\q\1\y\p\f\v\x\n\c\e\l\d\f\0\e\y\c\r\0\h\y\e\y\k\o\z\v\a\l\d\q\7\i\9\g\i\u\1\c\m\r\o\6\w\k\7\v\g\m\2\o\1\6\n\x\s\7\e\f\6\d\q\1\x\3\e\r\6\g\g\i\3\p\o\v\9\m\e\u\w\d\9\j\7\u\g\h\p\l\w\k\u\j\u\i\o\8\1\v\t\w\l\u\j\g\w\f\6\h\t\s\e\l\v\s\q\1\i\k\3\s\l\m\5\c\2\g\u\9\8\v\h\0\j\4\o\l\e\m\9\m\i\3\z\0\w\8\0\w\3\8\4\c\s\n\4\t\t\1\f\m\k\x\a\n\n\p\8\t\1\f\2\3\u\0\l\v\4\g\4\7\1\h\b\9\c\m\o\m\9\n\q\8\f\j\8\y\d\o\b\1\e\7\i\0\y\y\h\3\y\j\7\1\b\k\v\w\i\8\z\j\f\1\z\u\s\y\0\7\k\u\q\k\y\3\t\k\w\d\d\9\t\8\0\9\f\t\g\j\3\m\0\8\x\q\5\j\k\d\3\8\6\k\7\b\w\s\p\d\c\o\k\c\i\s\5\s\d\2\0\k\1\z\s\4\d\9\6\9\0 ]] 00:06:30.363 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.363 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.622 [2024-12-10 21:33:31.157014] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:30.622 [2024-12-10 21:33:31.157108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:06:30.622 [2024-12-10 21:33:31.303016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.622 [2024-12-10 21:33:31.337016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.622 [2024-12-10 21:33:31.366363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.622  [2024-12-10T21:33:31.663Z] Copying: 512/512 [B] (average 500 kBps) 00:06:30.880 00:06:30.880 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 38glsaepkjpbx0vkwhiw1iwpgtwp7b3r6ztoftza96ptdqj5q26eb33ph711o6xposkigjrinyh2hoaed9nx3vs9d9ntt7zcvky9a6ltstrkxws350ajhcu6j4uqv5p6stdqqtiffi2e6qubalp81n8iwu1330i2jeromq29bv7etf9ow8es4azcjarh34nyb1mzs1q4my7378spulm4n2fwb09uxrq1ypfvxnceldf0eycr0hyeykozvaldq7i9giu1cmro6wk7vgm2o16nxs7ef6dq1x3er6ggi3pov9meuwd9j7ughplwkujuio81vtwlujgwf6htselvsq1ik3slm5c2gu98vh0j4olem9mi3z0w80w384csn4tt1fmkxannp8t1f23u0lv4g471hb9cmom9nq8fj8ydob1e7i0yyh3yj71bkvwi8zjf1zusy07kuqky3tkwdd9t809ftgj3m08xq5jkd386k7bwspdcokcis5sd20k1zs4d9690 == \3\8\g\l\s\a\e\p\k\j\p\b\x\0\v\k\w\h\i\w\1\i\w\p\g\t\w\p\7\b\3\r\6\z\t\o\f\t\z\a\9\6\p\t\d\q\j\5\q\2\6\e\b\3\3\p\h\7\1\1\o\6\x\p\o\s\k\i\g\j\r\i\n\y\h\2\h\o\a\e\d\9\n\x\3\v\s\9\d\9\n\t\t\7\z\c\v\k\y\9\a\6\l\t\s\t\r\k\x\w\s\3\5\0\a\j\h\c\u\6\j\4\u\q\v\5\p\6\s\t\d\q\q\t\i\f\f\i\2\e\6\q\u\b\a\l\p\8\1\n\8\i\w\u\1\3\3\0\i\2\j\e\r\o\m\q\2\9\b\v\7\e\t\f\9\o\w\8\e\s\4\a\z\c\j\a\r\h\3\4\n\y\b\1\m\z\s\1\q\4\m\y\7\3\7\8\s\p\u\l\m\4\n\2\f\w\b\0\9\u\x\r\q\1\y\p\f\v\x\n\c\e\l\d\f\0\e\y\c\r\0\h\y\e\y\k\o\z\v\a\l\d\q\7\i\9\g\i\u\1\c\m\r\o\6\w\k\7\v\g\m\2\o\1\6\n\x\s\7\e\f\6\d\q\1\x\3\e\r\6\g\g\i\3\p\o\v\9\m\e\u\w\d\9\j\7\u\g\h\p\l\w\k\u\j\u\i\o\8\1\v\t\w\l\u\j\g\w\f\6\h\t\s\e\l\v\s\q\1\i\k\3\s\l\m\5\c\2\g\u\9\8\v\h\0\j\4\o\l\e\m\9\m\i\3\z\0\w\8\0\w\3\8\4\c\s\n\4\t\t\1\f\m\k\x\a\n\n\p\8\t\1\f\2\3\u\0\l\v\4\g\4\7\1\h\b\9\c\m\o\m\9\n\q\8\f\j\8\y\d\o\b\1\e\7\i\0\y\y\h\3\y\j\7\1\b\k\v\w\i\8\z\j\f\1\z\u\s\y\0\7\k\u\q\k\y\3\t\k\w\d\d\9\t\8\0\9\f\t\g\j\3\m\0\8\x\q\5\j\k\d\3\8\6\k\7\b\w\s\p\d\c\o\k\c\i\s\5\s\d\2\0\k\1\z\s\4\d\9\6\9\0 ]] 00:06:30.880 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.880 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:30.880 [2024-12-10 21:33:31.561648] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:30.880 [2024-12-10 21:33:31.561770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60392 ] 00:06:31.138 [2024-12-10 21:33:31.707267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.138 [2024-12-10 21:33:31.741472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.138 [2024-12-10 21:33:31.772395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.138  [2024-12-10T21:33:31.921Z] Copying: 512/512 [B] (average 166 kBps) 00:06:31.138 00:06:31.396 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 38glsaepkjpbx0vkwhiw1iwpgtwp7b3r6ztoftza96ptdqj5q26eb33ph711o6xposkigjrinyh2hoaed9nx3vs9d9ntt7zcvky9a6ltstrkxws350ajhcu6j4uqv5p6stdqqtiffi2e6qubalp81n8iwu1330i2jeromq29bv7etf9ow8es4azcjarh34nyb1mzs1q4my7378spulm4n2fwb09uxrq1ypfvxnceldf0eycr0hyeykozvaldq7i9giu1cmro6wk7vgm2o16nxs7ef6dq1x3er6ggi3pov9meuwd9j7ughplwkujuio81vtwlujgwf6htselvsq1ik3slm5c2gu98vh0j4olem9mi3z0w80w384csn4tt1fmkxannp8t1f23u0lv4g471hb9cmom9nq8fj8ydob1e7i0yyh3yj71bkvwi8zjf1zusy07kuqky3tkwdd9t809ftgj3m08xq5jkd386k7bwspdcokcis5sd20k1zs4d9690 == \3\8\g\l\s\a\e\p\k\j\p\b\x\0\v\k\w\h\i\w\1\i\w\p\g\t\w\p\7\b\3\r\6\z\t\o\f\t\z\a\9\6\p\t\d\q\j\5\q\2\6\e\b\3\3\p\h\7\1\1\o\6\x\p\o\s\k\i\g\j\r\i\n\y\h\2\h\o\a\e\d\9\n\x\3\v\s\9\d\9\n\t\t\7\z\c\v\k\y\9\a\6\l\t\s\t\r\k\x\w\s\3\5\0\a\j\h\c\u\6\j\4\u\q\v\5\p\6\s\t\d\q\q\t\i\f\f\i\2\e\6\q\u\b\a\l\p\8\1\n\8\i\w\u\1\3\3\0\i\2\j\e\r\o\m\q\2\9\b\v\7\e\t\f\9\o\w\8\e\s\4\a\z\c\j\a\r\h\3\4\n\y\b\1\m\z\s\1\q\4\m\y\7\3\7\8\s\p\u\l\m\4\n\2\f\w\b\0\9\u\x\r\q\1\y\p\f\v\x\n\c\e\l\d\f\0\e\y\c\r\0\h\y\e\y\k\o\z\v\a\l\d\q\7\i\9\g\i\u\1\c\m\r\o\6\w\k\7\v\g\m\2\o\1\6\n\x\s\7\e\f\6\d\q\1\x\3\e\r\6\g\g\i\3\p\o\v\9\m\e\u\w\d\9\j\7\u\g\h\p\l\w\k\u\j\u\i\o\8\1\v\t\w\l\u\j\g\w\f\6\h\t\s\e\l\v\s\q\1\i\k\3\s\l\m\5\c\2\g\u\9\8\v\h\0\j\4\o\l\e\m\9\m\i\3\z\0\w\8\0\w\3\8\4\c\s\n\4\t\t\1\f\m\k\x\a\n\n\p\8\t\1\f\2\3\u\0\l\v\4\g\4\7\1\h\b\9\c\m\o\m\9\n\q\8\f\j\8\y\d\o\b\1\e\7\i\0\y\y\h\3\y\j\7\1\b\k\v\w\i\8\z\j\f\1\z\u\s\y\0\7\k\u\q\k\y\3\t\k\w\d\d\9\t\8\0\9\f\t\g\j\3\m\0\8\x\q\5\j\k\d\3\8\6\k\7\b\w\s\p\d\c\o\k\c\i\s\5\s\d\2\0\k\1\z\s\4\d\9\6\9\0 ]] 00:06:31.396 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.396 21:33:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:31.396 [2024-12-10 21:33:31.973329] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:31.396 [2024-12-10 21:33:31.973495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:06:31.396 [2024-12-10 21:33:32.128012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.396 [2024-12-10 21:33:32.162929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.655 [2024-12-10 21:33:32.193703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.655  [2024-12-10T21:33:32.438Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.655 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 38glsaepkjpbx0vkwhiw1iwpgtwp7b3r6ztoftza96ptdqj5q26eb33ph711o6xposkigjrinyh2hoaed9nx3vs9d9ntt7zcvky9a6ltstrkxws350ajhcu6j4uqv5p6stdqqtiffi2e6qubalp81n8iwu1330i2jeromq29bv7etf9ow8es4azcjarh34nyb1mzs1q4my7378spulm4n2fwb09uxrq1ypfvxnceldf0eycr0hyeykozvaldq7i9giu1cmro6wk7vgm2o16nxs7ef6dq1x3er6ggi3pov9meuwd9j7ughplwkujuio81vtwlujgwf6htselvsq1ik3slm5c2gu98vh0j4olem9mi3z0w80w384csn4tt1fmkxannp8t1f23u0lv4g471hb9cmom9nq8fj8ydob1e7i0yyh3yj71bkvwi8zjf1zusy07kuqky3tkwdd9t809ftgj3m08xq5jkd386k7bwspdcokcis5sd20k1zs4d9690 == \3\8\g\l\s\a\e\p\k\j\p\b\x\0\v\k\w\h\i\w\1\i\w\p\g\t\w\p\7\b\3\r\6\z\t\o\f\t\z\a\9\6\p\t\d\q\j\5\q\2\6\e\b\3\3\p\h\7\1\1\o\6\x\p\o\s\k\i\g\j\r\i\n\y\h\2\h\o\a\e\d\9\n\x\3\v\s\9\d\9\n\t\t\7\z\c\v\k\y\9\a\6\l\t\s\t\r\k\x\w\s\3\5\0\a\j\h\c\u\6\j\4\u\q\v\5\p\6\s\t\d\q\q\t\i\f\f\i\2\e\6\q\u\b\a\l\p\8\1\n\8\i\w\u\1\3\3\0\i\2\j\e\r\o\m\q\2\9\b\v\7\e\t\f\9\o\w\8\e\s\4\a\z\c\j\a\r\h\3\4\n\y\b\1\m\z\s\1\q\4\m\y\7\3\7\8\s\p\u\l\m\4\n\2\f\w\b\0\9\u\x\r\q\1\y\p\f\v\x\n\c\e\l\d\f\0\e\y\c\r\0\h\y\e\y\k\o\z\v\a\l\d\q\7\i\9\g\i\u\1\c\m\r\o\6\w\k\7\v\g\m\2\o\1\6\n\x\s\7\e\f\6\d\q\1\x\3\e\r\6\g\g\i\3\p\o\v\9\m\e\u\w\d\9\j\7\u\g\h\p\l\w\k\u\j\u\i\o\8\1\v\t\w\l\u\j\g\w\f\6\h\t\s\e\l\v\s\q\1\i\k\3\s\l\m\5\c\2\g\u\9\8\v\h\0\j\4\o\l\e\m\9\m\i\3\z\0\w\8\0\w\3\8\4\c\s\n\4\t\t\1\f\m\k\x\a\n\n\p\8\t\1\f\2\3\u\0\l\v\4\g\4\7\1\h\b\9\c\m\o\m\9\n\q\8\f\j\8\y\d\o\b\1\e\7\i\0\y\y\h\3\y\j\7\1\b\k\v\w\i\8\z\j\f\1\z\u\s\y\0\7\k\u\q\k\y\3\t\k\w\d\d\9\t\8\0\9\f\t\g\j\3\m\0\8\x\q\5\j\k\d\3\8\6\k\7\b\w\s\p\d\c\o\k\c\i\s\5\s\d\2\0\k\1\z\s\4\d\9\6\9\0 ]] 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.655 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:31.655 [2024-12-10 21:33:32.404475] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:31.655 [2024-12-10 21:33:32.404579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60411 ] 00:06:31.914 [2024-12-10 21:33:32.547789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.914 [2024-12-10 21:33:32.581189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.914 [2024-12-10 21:33:32.610850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.914  [2024-12-10T21:33:32.955Z] Copying: 512/512 [B] (average 500 kBps) 00:06:32.172 00:06:32.172 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9b5k5v8q0ztfrejntekkgftl5pggqxm8kocu2btytpx1ri77vagrc2s3htfp5wvrdjqzl3i7geog8c1y2sodwpswpz2j0fwz6us013jkiazkhniog72ut26b5pg230ug9nfvzj6tr8o5aow80u3oved1zzq8k49y0cyxllfxalu7w29mxxautfj3ycxtanvb26lzok0vpgghn4wg9jdzy3r2vjpnuq37kzq9jg4v42equic7zqv2hr1m1xxkw64jqfttbereuysuyzn1o5xdks6wz1h8mlob4sp9rj02kpp848ynng56222k3ifls1pw1etocxfdwfd7na1p1hho2vy7gup5ok07caoass3q9igqkdvo5ns1rg6bd7o0wiolymw66vsczin7zbmrd4cf5fcma7zo913u0qojhwyzqpvh3x7it5g36qf8xvt9gmqwdz8ora7u70pgcvbxtwwbfzs5kgfkzueucltivfpk1akthqxtoznwzuhdk3a50o2w == \9\b\5\k\5\v\8\q\0\z\t\f\r\e\j\n\t\e\k\k\g\f\t\l\5\p\g\g\q\x\m\8\k\o\c\u\2\b\t\y\t\p\x\1\r\i\7\7\v\a\g\r\c\2\s\3\h\t\f\p\5\w\v\r\d\j\q\z\l\3\i\7\g\e\o\g\8\c\1\y\2\s\o\d\w\p\s\w\p\z\2\j\0\f\w\z\6\u\s\0\1\3\j\k\i\a\z\k\h\n\i\o\g\7\2\u\t\2\6\b\5\p\g\2\3\0\u\g\9\n\f\v\z\j\6\t\r\8\o\5\a\o\w\8\0\u\3\o\v\e\d\1\z\z\q\8\k\4\9\y\0\c\y\x\l\l\f\x\a\l\u\7\w\2\9\m\x\x\a\u\t\f\j\3\y\c\x\t\a\n\v\b\2\6\l\z\o\k\0\v\p\g\g\h\n\4\w\g\9\j\d\z\y\3\r\2\v\j\p\n\u\q\3\7\k\z\q\9\j\g\4\v\4\2\e\q\u\i\c\7\z\q\v\2\h\r\1\m\1\x\x\k\w\6\4\j\q\f\t\t\b\e\r\e\u\y\s\u\y\z\n\1\o\5\x\d\k\s\6\w\z\1\h\8\m\l\o\b\4\s\p\9\r\j\0\2\k\p\p\8\4\8\y\n\n\g\5\6\2\2\2\k\3\i\f\l\s\1\p\w\1\e\t\o\c\x\f\d\w\f\d\7\n\a\1\p\1\h\h\o\2\v\y\7\g\u\p\5\o\k\0\7\c\a\o\a\s\s\3\q\9\i\g\q\k\d\v\o\5\n\s\1\r\g\6\b\d\7\o\0\w\i\o\l\y\m\w\6\6\v\s\c\z\i\n\7\z\b\m\r\d\4\c\f\5\f\c\m\a\7\z\o\9\1\3\u\0\q\o\j\h\w\y\z\q\p\v\h\3\x\7\i\t\5\g\3\6\q\f\8\x\v\t\9\g\m\q\w\d\z\8\o\r\a\7\u\7\0\p\g\c\v\b\x\t\w\w\b\f\z\s\5\k\g\f\k\z\u\e\u\c\l\t\i\v\f\p\k\1\a\k\t\h\q\x\t\o\z\n\w\z\u\h\d\k\3\a\5\0\o\2\w ]] 00:06:32.172 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.172 21:33:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:32.172 [2024-12-10 21:33:32.800013] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:32.172 [2024-12-10 21:33:32.800113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60419 ] 00:06:32.172 [2024-12-10 21:33:32.940838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.431 [2024-12-10 21:33:32.974496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.431 [2024-12-10 21:33:33.004616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.431  [2024-12-10T21:33:33.214Z] Copying: 512/512 [B] (average 500 kBps) 00:06:32.431 00:06:32.431 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9b5k5v8q0ztfrejntekkgftl5pggqxm8kocu2btytpx1ri77vagrc2s3htfp5wvrdjqzl3i7geog8c1y2sodwpswpz2j0fwz6us013jkiazkhniog72ut26b5pg230ug9nfvzj6tr8o5aow80u3oved1zzq8k49y0cyxllfxalu7w29mxxautfj3ycxtanvb26lzok0vpgghn4wg9jdzy3r2vjpnuq37kzq9jg4v42equic7zqv2hr1m1xxkw64jqfttbereuysuyzn1o5xdks6wz1h8mlob4sp9rj02kpp848ynng56222k3ifls1pw1etocxfdwfd7na1p1hho2vy7gup5ok07caoass3q9igqkdvo5ns1rg6bd7o0wiolymw66vsczin7zbmrd4cf5fcma7zo913u0qojhwyzqpvh3x7it5g36qf8xvt9gmqwdz8ora7u70pgcvbxtwwbfzs5kgfkzueucltivfpk1akthqxtoznwzuhdk3a50o2w == \9\b\5\k\5\v\8\q\0\z\t\f\r\e\j\n\t\e\k\k\g\f\t\l\5\p\g\g\q\x\m\8\k\o\c\u\2\b\t\y\t\p\x\1\r\i\7\7\v\a\g\r\c\2\s\3\h\t\f\p\5\w\v\r\d\j\q\z\l\3\i\7\g\e\o\g\8\c\1\y\2\s\o\d\w\p\s\w\p\z\2\j\0\f\w\z\6\u\s\0\1\3\j\k\i\a\z\k\h\n\i\o\g\7\2\u\t\2\6\b\5\p\g\2\3\0\u\g\9\n\f\v\z\j\6\t\r\8\o\5\a\o\w\8\0\u\3\o\v\e\d\1\z\z\q\8\k\4\9\y\0\c\y\x\l\l\f\x\a\l\u\7\w\2\9\m\x\x\a\u\t\f\j\3\y\c\x\t\a\n\v\b\2\6\l\z\o\k\0\v\p\g\g\h\n\4\w\g\9\j\d\z\y\3\r\2\v\j\p\n\u\q\3\7\k\z\q\9\j\g\4\v\4\2\e\q\u\i\c\7\z\q\v\2\h\r\1\m\1\x\x\k\w\6\4\j\q\f\t\t\b\e\r\e\u\y\s\u\y\z\n\1\o\5\x\d\k\s\6\w\z\1\h\8\m\l\o\b\4\s\p\9\r\j\0\2\k\p\p\8\4\8\y\n\n\g\5\6\2\2\2\k\3\i\f\l\s\1\p\w\1\e\t\o\c\x\f\d\w\f\d\7\n\a\1\p\1\h\h\o\2\v\y\7\g\u\p\5\o\k\0\7\c\a\o\a\s\s\3\q\9\i\g\q\k\d\v\o\5\n\s\1\r\g\6\b\d\7\o\0\w\i\o\l\y\m\w\6\6\v\s\c\z\i\n\7\z\b\m\r\d\4\c\f\5\f\c\m\a\7\z\o\9\1\3\u\0\q\o\j\h\w\y\z\q\p\v\h\3\x\7\i\t\5\g\3\6\q\f\8\x\v\t\9\g\m\q\w\d\z\8\o\r\a\7\u\7\0\p\g\c\v\b\x\t\w\w\b\f\z\s\5\k\g\f\k\z\u\e\u\c\l\t\i\v\f\p\k\1\a\k\t\h\q\x\t\o\z\n\w\z\u\h\d\k\3\a\5\0\o\2\w ]] 00:06:32.431 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.431 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:32.689 [2024-12-10 21:33:33.214139] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:32.689 [2024-12-10 21:33:33.214274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:06:32.689 [2024-12-10 21:33:33.363653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.689 [2024-12-10 21:33:33.414109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.689 [2024-12-10 21:33:33.450178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.947  [2024-12-10T21:33:33.730Z] Copying: 512/512 [B] (average 250 kBps) 00:06:32.947 00:06:32.947 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9b5k5v8q0ztfrejntekkgftl5pggqxm8kocu2btytpx1ri77vagrc2s3htfp5wvrdjqzl3i7geog8c1y2sodwpswpz2j0fwz6us013jkiazkhniog72ut26b5pg230ug9nfvzj6tr8o5aow80u3oved1zzq8k49y0cyxllfxalu7w29mxxautfj3ycxtanvb26lzok0vpgghn4wg9jdzy3r2vjpnuq37kzq9jg4v42equic7zqv2hr1m1xxkw64jqfttbereuysuyzn1o5xdks6wz1h8mlob4sp9rj02kpp848ynng56222k3ifls1pw1etocxfdwfd7na1p1hho2vy7gup5ok07caoass3q9igqkdvo5ns1rg6bd7o0wiolymw66vsczin7zbmrd4cf5fcma7zo913u0qojhwyzqpvh3x7it5g36qf8xvt9gmqwdz8ora7u70pgcvbxtwwbfzs5kgfkzueucltivfpk1akthqxtoznwzuhdk3a50o2w == \9\b\5\k\5\v\8\q\0\z\t\f\r\e\j\n\t\e\k\k\g\f\t\l\5\p\g\g\q\x\m\8\k\o\c\u\2\b\t\y\t\p\x\1\r\i\7\7\v\a\g\r\c\2\s\3\h\t\f\p\5\w\v\r\d\j\q\z\l\3\i\7\g\e\o\g\8\c\1\y\2\s\o\d\w\p\s\w\p\z\2\j\0\f\w\z\6\u\s\0\1\3\j\k\i\a\z\k\h\n\i\o\g\7\2\u\t\2\6\b\5\p\g\2\3\0\u\g\9\n\f\v\z\j\6\t\r\8\o\5\a\o\w\8\0\u\3\o\v\e\d\1\z\z\q\8\k\4\9\y\0\c\y\x\l\l\f\x\a\l\u\7\w\2\9\m\x\x\a\u\t\f\j\3\y\c\x\t\a\n\v\b\2\6\l\z\o\k\0\v\p\g\g\h\n\4\w\g\9\j\d\z\y\3\r\2\v\j\p\n\u\q\3\7\k\z\q\9\j\g\4\v\4\2\e\q\u\i\c\7\z\q\v\2\h\r\1\m\1\x\x\k\w\6\4\j\q\f\t\t\b\e\r\e\u\y\s\u\y\z\n\1\o\5\x\d\k\s\6\w\z\1\h\8\m\l\o\b\4\s\p\9\r\j\0\2\k\p\p\8\4\8\y\n\n\g\5\6\2\2\2\k\3\i\f\l\s\1\p\w\1\e\t\o\c\x\f\d\w\f\d\7\n\a\1\p\1\h\h\o\2\v\y\7\g\u\p\5\o\k\0\7\c\a\o\a\s\s\3\q\9\i\g\q\k\d\v\o\5\n\s\1\r\g\6\b\d\7\o\0\w\i\o\l\y\m\w\6\6\v\s\c\z\i\n\7\z\b\m\r\d\4\c\f\5\f\c\m\a\7\z\o\9\1\3\u\0\q\o\j\h\w\y\z\q\p\v\h\3\x\7\i\t\5\g\3\6\q\f\8\x\v\t\9\g\m\q\w\d\z\8\o\r\a\7\u\7\0\p\g\c\v\b\x\t\w\w\b\f\z\s\5\k\g\f\k\z\u\e\u\c\l\t\i\v\f\p\k\1\a\k\t\h\q\x\t\o\z\n\w\z\u\h\d\k\3\a\5\0\o\2\w ]] 00:06:32.947 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.947 21:33:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:32.947 [2024-12-10 21:33:33.661825] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:32.947 [2024-12-10 21:33:33.661944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60434 ] 00:06:33.205 [2024-12-10 21:33:33.809082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.205 [2024-12-10 21:33:33.851063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.205 [2024-12-10 21:33:33.884553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.205  [2024-12-10T21:33:34.246Z] Copying: 512/512 [B] (average 250 kBps) 00:06:33.463 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9b5k5v8q0ztfrejntekkgftl5pggqxm8kocu2btytpx1ri77vagrc2s3htfp5wvrdjqzl3i7geog8c1y2sodwpswpz2j0fwz6us013jkiazkhniog72ut26b5pg230ug9nfvzj6tr8o5aow80u3oved1zzq8k49y0cyxllfxalu7w29mxxautfj3ycxtanvb26lzok0vpgghn4wg9jdzy3r2vjpnuq37kzq9jg4v42equic7zqv2hr1m1xxkw64jqfttbereuysuyzn1o5xdks6wz1h8mlob4sp9rj02kpp848ynng56222k3ifls1pw1etocxfdwfd7na1p1hho2vy7gup5ok07caoass3q9igqkdvo5ns1rg6bd7o0wiolymw66vsczin7zbmrd4cf5fcma7zo913u0qojhwyzqpvh3x7it5g36qf8xvt9gmqwdz8ora7u70pgcvbxtwwbfzs5kgfkzueucltivfpk1akthqxtoznwzuhdk3a50o2w == \9\b\5\k\5\v\8\q\0\z\t\f\r\e\j\n\t\e\k\k\g\f\t\l\5\p\g\g\q\x\m\8\k\o\c\u\2\b\t\y\t\p\x\1\r\i\7\7\v\a\g\r\c\2\s\3\h\t\f\p\5\w\v\r\d\j\q\z\l\3\i\7\g\e\o\g\8\c\1\y\2\s\o\d\w\p\s\w\p\z\2\j\0\f\w\z\6\u\s\0\1\3\j\k\i\a\z\k\h\n\i\o\g\7\2\u\t\2\6\b\5\p\g\2\3\0\u\g\9\n\f\v\z\j\6\t\r\8\o\5\a\o\w\8\0\u\3\o\v\e\d\1\z\z\q\8\k\4\9\y\0\c\y\x\l\l\f\x\a\l\u\7\w\2\9\m\x\x\a\u\t\f\j\3\y\c\x\t\a\n\v\b\2\6\l\z\o\k\0\v\p\g\g\h\n\4\w\g\9\j\d\z\y\3\r\2\v\j\p\n\u\q\3\7\k\z\q\9\j\g\4\v\4\2\e\q\u\i\c\7\z\q\v\2\h\r\1\m\1\x\x\k\w\6\4\j\q\f\t\t\b\e\r\e\u\y\s\u\y\z\n\1\o\5\x\d\k\s\6\w\z\1\h\8\m\l\o\b\4\s\p\9\r\j\0\2\k\p\p\8\4\8\y\n\n\g\5\6\2\2\2\k\3\i\f\l\s\1\p\w\1\e\t\o\c\x\f\d\w\f\d\7\n\a\1\p\1\h\h\o\2\v\y\7\g\u\p\5\o\k\0\7\c\a\o\a\s\s\3\q\9\i\g\q\k\d\v\o\5\n\s\1\r\g\6\b\d\7\o\0\w\i\o\l\y\m\w\6\6\v\s\c\z\i\n\7\z\b\m\r\d\4\c\f\5\f\c\m\a\7\z\o\9\1\3\u\0\q\o\j\h\w\y\z\q\p\v\h\3\x\7\i\t\5\g\3\6\q\f\8\x\v\t\9\g\m\q\w\d\z\8\o\r\a\7\u\7\0\p\g\c\v\b\x\t\w\w\b\f\z\s\5\k\g\f\k\z\u\e\u\c\l\t\i\v\f\p\k\1\a\k\t\h\q\x\t\o\z\n\w\z\u\h\d\k\3\a\5\0\o\2\w ]] 00:06:33.463 ************************************ 00:06:33.463 END TEST dd_flags_misc 00:06:33.463 ************************************ 00:06:33.463 00:06:33.463 real 0m3.405s 00:06:33.463 user 0m1.785s 00:06:33.463 sys 0m1.444s 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:33.463 * Second test run, disabling liburing, forcing AIO 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.463 ************************************ 00:06:33.463 START TEST dd_flag_append_forced_aio 00:06:33.463 ************************************ 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=mape2ixws321tcw2te73dpzsn2bo0fh1 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=82gj7bdxv93jwp4otley00m937ii8eko 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s mape2ixws321tcw2te73dpzsn2bo0fh1 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 82gj7bdxv93jwp4otley00m937ii8eko 00:06:33.463 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:33.463 [2024-12-10 21:33:34.150379] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
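A minimal bash sketch of the append check this dd_flag_append_forced_aio run drives; the two 32-byte strings and the spdk_dd invocation are copied from the trace, while writing the strings into the dump files via printf redirection and the final comparison are assumptions about what dd/posix.sh does:

    # Sketch only: strings and the spdk_dd call are from the trace; the rest is assumed.
    dump0=mape2ixws321tcw2te73dpzsn2bo0fh1    # gen_bytes 32 result, per the trace
    dump1=82gj7bdxv93jwp4otley00m937ii8eko    # gen_bytes 32 result, per the trace
    base=/home/vagrant/spdk_repo/spdk/test/dd
    printf %s "$dump0" > "$base/dd.dump0"     # assumed redirection into the dump files
    printf %s "$dump1" > "$base/dd.dump1"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
        --if="$base/dd.dump0" --of="$base/dd.dump1" --oflag=append
    # with --oflag=append, dd.dump1 should end up as its original bytes followed by dump0's bytes
    [[ "$(< "$base/dd.dump1")" == "${dump1}${dump0}" ]]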
00:06:33.463 [2024-12-10 21:33:34.150525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:06:33.722 [2024-12-10 21:33:34.292913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.722 [2024-12-10 21:33:34.342395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.722 [2024-12-10 21:33:34.377043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.722  [2024-12-10T21:33:34.803Z] Copying: 32/32 [B] (average 31 kBps) 00:06:34.020 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 82gj7bdxv93jwp4otley00m937ii8ekomape2ixws321tcw2te73dpzsn2bo0fh1 == \8\2\g\j\7\b\d\x\v\9\3\j\w\p\4\o\t\l\e\y\0\0\m\9\3\7\i\i\8\e\k\o\m\a\p\e\2\i\x\w\s\3\2\1\t\c\w\2\t\e\7\3\d\p\z\s\n\2\b\o\0\f\h\1 ]] 00:06:34.020 00:06:34.020 real 0m0.458s 00:06:34.020 user 0m0.236s 00:06:34.020 sys 0m0.098s 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.020 ************************************ 00:06:34.020 END TEST dd_flag_append_forced_aio 00:06:34.020 ************************************ 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.020 ************************************ 00:06:34.020 START TEST dd_flag_directory_forced_aio 00:06:34.020 ************************************ 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.020 21:33:34 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.020 21:33:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.020 [2024-12-10 21:33:34.644865] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:34.020 [2024-12-10 21:33:34.644961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60489 ] 00:06:34.020 [2024-12-10 21:33:34.785417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.279 [2024-12-10 21:33:34.820875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.279 [2024-12-10 21:33:34.850048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.279 [2024-12-10 21:33:34.869695] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.279 [2024-12-10 21:33:34.869750] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.279 [2024-12-10 21:33:34.869765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.279 [2024-12-10 21:33:34.937305] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.279 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.537 [2024-12-10 21:33:35.063741] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:34.537 [2024-12-10 21:33:35.063834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60498 ] 00:06:34.537 [2024-12-10 21:33:35.207702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.537 [2024-12-10 21:33:35.241081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.537 [2024-12-10 21:33:35.272150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.537 [2024-12-10 21:33:35.293000] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.537 [2024-12-10 21:33:35.293083] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.537 [2024-12-10 21:33:35.293106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.795 [2024-12-10 21:33:35.362914] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:34.795 21:33:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.795 00:06:34.795 real 0m0.833s 00:06:34.795 user 0m0.420s 00:06:34.795 sys 0m0.202s 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.795 ************************************ 00:06:34.795 END TEST dd_flag_directory_forced_aio 00:06:34.795 ************************************ 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.795 ************************************ 00:06:34.795 START TEST dd_flag_nofollow_forced_aio 00:06:34.795 ************************************ 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.795 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.795 [2024-12-10 21:33:35.537415] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:34.795 [2024-12-10 21:33:35.537549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60528 ] 00:06:35.054 [2024-12-10 21:33:35.685568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.054 [2024-12-10 21:33:35.725182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.054 [2024-12-10 21:33:35.758041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.054 [2024-12-10 21:33:35.780028] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.054 [2024-12-10 21:33:35.780105] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.054 [2024-12-10 21:33:35.780124] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.314 [2024-12-10 21:33:35.853699] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.314 21:33:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.314 [2024-12-10 21:33:35.983634] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:35.314 [2024-12-10 21:33:35.983750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60536 ] 00:06:35.572 [2024-12-10 21:33:36.132702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.572 [2024-12-10 21:33:36.166246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.572 [2024-12-10 21:33:36.195861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.572 [2024-12-10 21:33:36.215481] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.572 [2024-12-10 21:33:36.215549] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.572 [2024-12-10 21:33:36.215573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.572 [2024-12-10 21:33:36.288087] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.830 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.830 [2024-12-10 21:33:36.412131] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:35.830 [2024-12-10 21:33:36.412231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:06:35.830 [2024-12-10 21:33:36.553558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.830 [2024-12-10 21:33:36.587396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.089 [2024-12-10 21:33:36.619089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.089  [2024-12-10T21:33:36.872Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.089 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ew56xbbnst6gtrw2wd7633gec6soyt12fnqt758cw2sjugbf6ubxugfk4712ku1jofm44cf7ouqkaxqiz7hdihwji31olypuepzs494jfqcrq625l5l1kee1j2kbc7vut325qcypxi8tp5a6rr2ceadacmodjt9ttgx6p0ucdxh6t7ipt275htdb5id4crhjmvxozz8wqbe90two7trgnq0lho5hks2bog0hptlok6dyeru16d22ko07cj6k6jkl8gt2fs0c255y0kux0jm1fc826oxuw99u8iusludf0k33sm5q913zqvsleiyciwwl7ai5h42mv0oi2c6t3kc5r2695c5y11oyejjos2tmz62hc9m9r8kxh8oodct8khzqfu655g3136e0dvg5fmm8gv7jqlqxintcq8i7qoe4smyc24h0ho4r1qr0o69keryn5vrav2qb933aamnis1tlv6nyur4o3any302mfr3lah23t3e5hmz5xbr2y6gp71nw == \e\w\5\6\x\b\b\n\s\t\6\g\t\r\w\2\w\d\7\6\3\3\g\e\c\6\s\o\y\t\1\2\f\n\q\t\7\5\8\c\w\2\s\j\u\g\b\f\6\u\b\x\u\g\f\k\4\7\1\2\k\u\1\j\o\f\m\4\4\c\f\7\o\u\q\k\a\x\q\i\z\7\h\d\i\h\w\j\i\3\1\o\l\y\p\u\e\p\z\s\4\9\4\j\f\q\c\r\q\6\2\5\l\5\l\1\k\e\e\1\j\2\k\b\c\7\v\u\t\3\2\5\q\c\y\p\x\i\8\t\p\5\a\6\r\r\2\c\e\a\d\a\c\m\o\d\j\t\9\t\t\g\x\6\p\0\u\c\d\x\h\6\t\7\i\p\t\2\7\5\h\t\d\b\5\i\d\4\c\r\h\j\m\v\x\o\z\z\8\w\q\b\e\9\0\t\w\o\7\t\r\g\n\q\0\l\h\o\5\h\k\s\2\b\o\g\0\h\p\t\l\o\k\6\d\y\e\r\u\1\6\d\2\2\k\o\0\7\c\j\6\k\6\j\k\l\8\g\t\2\f\s\0\c\2\5\5\y\0\k\u\x\0\j\m\1\f\c\8\2\6\o\x\u\w\9\9\u\8\i\u\s\l\u\d\f\0\k\3\3\s\m\5\q\9\1\3\z\q\v\s\l\e\i\y\c\i\w\w\l\7\a\i\5\h\4\2\m\v\0\o\i\2\c\6\t\3\k\c\5\r\2\6\9\5\c\5\y\1\1\o\y\e\j\j\o\s\2\t\m\z\6\2\h\c\9\m\9\r\8\k\x\h\8\o\o\d\c\t\8\k\h\z\q\f\u\6\5\5\g\3\1\3\6\e\0\d\v\g\5\f\m\m\8\g\v\7\j\q\l\q\x\i\n\t\c\q\8\i\7\q\o\e\4\s\m\y\c\2\4\h\0\h\o\4\r\1\q\r\0\o\6\9\k\e\r\y\n\5\v\r\a\v\2\q\b\9\3\3\a\a\m\n\i\s\1\t\l\v\6\n\y\u\r\4\o\3\a\n\y\3\0\2\m\f\r\3\l\a\h\2\3\t\3\e\5\h\m\z\5\x\b\r\2\y\6\g\p\7\1\n\w ]] 00:06:36.090 00:06:36.090 real 0m1.327s 00:06:36.090 user 0m0.655s 00:06:36.090 sys 0m0.329s 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.090 ************************************ 00:06:36.090 END TEST dd_flag_nofollow_forced_aio 00:06:36.090 ************************************ 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.090 ************************************ 00:06:36.090 START TEST dd_flag_noatime_forced_aio 00:06:36.090 ************************************ 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733866416 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733866416 00:06:36.090 21:33:36 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:37.464 21:33:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.464 [2024-12-10 21:33:37.932306] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
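A minimal bash sketch of the noatime pattern this test exercises; the stat/arithmetic checks and the two spdk_dd invocations mirror this test's xtrace, while the surrounding variable handling is an assumption and the dd.dump1 (atime_of) side of the check is omitted for brevity:

    # Sketch only: mirrors the dd_flag_noatime_forced_aio trace, not the real dd/posix.sh.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$SRC")       # source atime before the copy
    sleep 1
    # reading the source with --iflag=noatime must leave its atime untouched
    "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
    (( atime_if == $(stat --printf=%X "$SRC") ))
    # a plain read (no noatime) should advance the atime past the recorded value
    "$DD" --aio --if="$SRC" --of="$DST"
    (( atime_if < $(stat --printf=%X "$SRC") ))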
00:06:37.464 [2024-12-10 21:33:37.932478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60580 ] 00:06:37.464 [2024-12-10 21:33:38.082752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.464 [2024-12-10 21:33:38.131648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.464 [2024-12-10 21:33:38.167039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.464  [2024-12-10T21:33:38.506Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.723 00:06:37.723 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.723 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733866416 )) 00:06:37.723 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.723 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733866416 )) 00:06:37.723 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.723 [2024-12-10 21:33:38.409432] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:37.723 [2024-12-10 21:33:38.409546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:06:37.981 [2024-12-10 21:33:38.555159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.981 [2024-12-10 21:33:38.590207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.981 [2024-12-10 21:33:38.621173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.981  [2024-12-10T21:33:39.022Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.239 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733866418 )) 00:06:38.239 00:06:38.239 real 0m1.949s 00:06:38.239 user 0m0.498s 00:06:38.239 sys 0m0.204s 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.239 ************************************ 00:06:38.239 END TEST dd_flag_noatime_forced_aio 00:06:38.239 ************************************ 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.239 ************************************ 00:06:38.239 START TEST dd_flags_misc_forced_aio 00:06:38.239 ************************************ 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.239 21:33:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:38.239 [2024-12-10 21:33:38.912821] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:38.239 [2024-12-10 21:33:38.912947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60618 ] 00:06:38.498 [2024-12-10 21:33:39.104035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.498 [2024-12-10 21:33:39.154246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.498 [2024-12-10 21:33:39.197199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.498  [2024-12-10T21:33:39.539Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.756 00:06:38.756 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u0a6zwemhh8pt3dxqaufk2b1fe11k664hd4e34d1afzof8rq0t37gm4u32cnbe1xu91f9v0x9dxo55tc2hfpejq4mi4vg933ukc3gipkl0lqdpc0zlsf7mom6kbmf01m3ww8y1xbwvyeg83is2czmhdaxixp8r9xmhj1xr7mocn3urgvvzju8ychgpqjrdr3k12ku6kx72wx8wm75da6mrfnmzh2xhn0nkpvygfd0dt1galvvkaf97obw8w3bjyczvfjoz6d4hcusw9c7kh21wq8346dcpmm46ul5ly67fe3j2zhipna94qeo58fju83399umdm2bc99wi9mqx8rhcj3g9llkzcaxqut0suktde8qxy81yczn4y0laitr2upxxjwwcmisyy1zkff1isrshq06xvc4j0dosfqjjwia8rmjo4ouim87qzsb7lgq62t0aqixlyh2201739nd4jeez1a1fo4n73egiknhp21hy01x0kdhjg0qd22sr5wfbl6 == 
\u\0\a\6\z\w\e\m\h\h\8\p\t\3\d\x\q\a\u\f\k\2\b\1\f\e\1\1\k\6\6\4\h\d\4\e\3\4\d\1\a\f\z\o\f\8\r\q\0\t\3\7\g\m\4\u\3\2\c\n\b\e\1\x\u\9\1\f\9\v\0\x\9\d\x\o\5\5\t\c\2\h\f\p\e\j\q\4\m\i\4\v\g\9\3\3\u\k\c\3\g\i\p\k\l\0\l\q\d\p\c\0\z\l\s\f\7\m\o\m\6\k\b\m\f\0\1\m\3\w\w\8\y\1\x\b\w\v\y\e\g\8\3\i\s\2\c\z\m\h\d\a\x\i\x\p\8\r\9\x\m\h\j\1\x\r\7\m\o\c\n\3\u\r\g\v\v\z\j\u\8\y\c\h\g\p\q\j\r\d\r\3\k\1\2\k\u\6\k\x\7\2\w\x\8\w\m\7\5\d\a\6\m\r\f\n\m\z\h\2\x\h\n\0\n\k\p\v\y\g\f\d\0\d\t\1\g\a\l\v\v\k\a\f\9\7\o\b\w\8\w\3\b\j\y\c\z\v\f\j\o\z\6\d\4\h\c\u\s\w\9\c\7\k\h\2\1\w\q\8\3\4\6\d\c\p\m\m\4\6\u\l\5\l\y\6\7\f\e\3\j\2\z\h\i\p\n\a\9\4\q\e\o\5\8\f\j\u\8\3\3\9\9\u\m\d\m\2\b\c\9\9\w\i\9\m\q\x\8\r\h\c\j\3\g\9\l\l\k\z\c\a\x\q\u\t\0\s\u\k\t\d\e\8\q\x\y\8\1\y\c\z\n\4\y\0\l\a\i\t\r\2\u\p\x\x\j\w\w\c\m\i\s\y\y\1\z\k\f\f\1\i\s\r\s\h\q\0\6\x\v\c\4\j\0\d\o\s\f\q\j\j\w\i\a\8\r\m\j\o\4\o\u\i\m\8\7\q\z\s\b\7\l\g\q\6\2\t\0\a\q\i\x\l\y\h\2\2\0\1\7\3\9\n\d\4\j\e\e\z\1\a\1\f\o\4\n\7\3\e\g\i\k\n\h\p\2\1\h\y\0\1\x\0\k\d\h\j\g\0\q\d\2\2\s\r\5\w\f\b\l\6 ]] 00:06:38.756 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.756 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:38.756 [2024-12-10 21:33:39.438798] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:38.756 [2024-12-10 21:33:39.438930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:06:39.014 [2024-12-10 21:33:39.585230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.014 [2024-12-10 21:33:39.619536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.014 [2024-12-10 21:33:39.648930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.014  [2024-12-10T21:33:40.054Z] Copying: 512/512 [B] (average 500 kBps) 00:06:39.271 00:06:39.272 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u0a6zwemhh8pt3dxqaufk2b1fe11k664hd4e34d1afzof8rq0t37gm4u32cnbe1xu91f9v0x9dxo55tc2hfpejq4mi4vg933ukc3gipkl0lqdpc0zlsf7mom6kbmf01m3ww8y1xbwvyeg83is2czmhdaxixp8r9xmhj1xr7mocn3urgvvzju8ychgpqjrdr3k12ku6kx72wx8wm75da6mrfnmzh2xhn0nkpvygfd0dt1galvvkaf97obw8w3bjyczvfjoz6d4hcusw9c7kh21wq8346dcpmm46ul5ly67fe3j2zhipna94qeo58fju83399umdm2bc99wi9mqx8rhcj3g9llkzcaxqut0suktde8qxy81yczn4y0laitr2upxxjwwcmisyy1zkff1isrshq06xvc4j0dosfqjjwia8rmjo4ouim87qzsb7lgq62t0aqixlyh2201739nd4jeez1a1fo4n73egiknhp21hy01x0kdhjg0qd22sr5wfbl6 == 
\u\0\a\6\z\w\e\m\h\h\8\p\t\3\d\x\q\a\u\f\k\2\b\1\f\e\1\1\k\6\6\4\h\d\4\e\3\4\d\1\a\f\z\o\f\8\r\q\0\t\3\7\g\m\4\u\3\2\c\n\b\e\1\x\u\9\1\f\9\v\0\x\9\d\x\o\5\5\t\c\2\h\f\p\e\j\q\4\m\i\4\v\g\9\3\3\u\k\c\3\g\i\p\k\l\0\l\q\d\p\c\0\z\l\s\f\7\m\o\m\6\k\b\m\f\0\1\m\3\w\w\8\y\1\x\b\w\v\y\e\g\8\3\i\s\2\c\z\m\h\d\a\x\i\x\p\8\r\9\x\m\h\j\1\x\r\7\m\o\c\n\3\u\r\g\v\v\z\j\u\8\y\c\h\g\p\q\j\r\d\r\3\k\1\2\k\u\6\k\x\7\2\w\x\8\w\m\7\5\d\a\6\m\r\f\n\m\z\h\2\x\h\n\0\n\k\p\v\y\g\f\d\0\d\t\1\g\a\l\v\v\k\a\f\9\7\o\b\w\8\w\3\b\j\y\c\z\v\f\j\o\z\6\d\4\h\c\u\s\w\9\c\7\k\h\2\1\w\q\8\3\4\6\d\c\p\m\m\4\6\u\l\5\l\y\6\7\f\e\3\j\2\z\h\i\p\n\a\9\4\q\e\o\5\8\f\j\u\8\3\3\9\9\u\m\d\m\2\b\c\9\9\w\i\9\m\q\x\8\r\h\c\j\3\g\9\l\l\k\z\c\a\x\q\u\t\0\s\u\k\t\d\e\8\q\x\y\8\1\y\c\z\n\4\y\0\l\a\i\t\r\2\u\p\x\x\j\w\w\c\m\i\s\y\y\1\z\k\f\f\1\i\s\r\s\h\q\0\6\x\v\c\4\j\0\d\o\s\f\q\j\j\w\i\a\8\r\m\j\o\4\o\u\i\m\8\7\q\z\s\b\7\l\g\q\6\2\t\0\a\q\i\x\l\y\h\2\2\0\1\7\3\9\n\d\4\j\e\e\z\1\a\1\f\o\4\n\7\3\e\g\i\k\n\h\p\2\1\h\y\0\1\x\0\k\d\h\j\g\0\q\d\2\2\s\r\5\w\f\b\l\6 ]] 00:06:39.272 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.272 21:33:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:39.272 [2024-12-10 21:33:39.873679] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:39.272 [2024-12-10 21:33:39.873775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60633 ] 00:06:39.272 [2024-12-10 21:33:40.017501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.272 [2024-12-10 21:33:40.051863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.556 [2024-12-10 21:33:40.081828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.556  [2024-12-10T21:33:40.339Z] Copying: 512/512 [B] (average 166 kBps) 00:06:39.556 00:06:39.556 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u0a6zwemhh8pt3dxqaufk2b1fe11k664hd4e34d1afzof8rq0t37gm4u32cnbe1xu91f9v0x9dxo55tc2hfpejq4mi4vg933ukc3gipkl0lqdpc0zlsf7mom6kbmf01m3ww8y1xbwvyeg83is2czmhdaxixp8r9xmhj1xr7mocn3urgvvzju8ychgpqjrdr3k12ku6kx72wx8wm75da6mrfnmzh2xhn0nkpvygfd0dt1galvvkaf97obw8w3bjyczvfjoz6d4hcusw9c7kh21wq8346dcpmm46ul5ly67fe3j2zhipna94qeo58fju83399umdm2bc99wi9mqx8rhcj3g9llkzcaxqut0suktde8qxy81yczn4y0laitr2upxxjwwcmisyy1zkff1isrshq06xvc4j0dosfqjjwia8rmjo4ouim87qzsb7lgq62t0aqixlyh2201739nd4jeez1a1fo4n73egiknhp21hy01x0kdhjg0qd22sr5wfbl6 == 
\u\0\a\6\z\w\e\m\h\h\8\p\t\3\d\x\q\a\u\f\k\2\b\1\f\e\1\1\k\6\6\4\h\d\4\e\3\4\d\1\a\f\z\o\f\8\r\q\0\t\3\7\g\m\4\u\3\2\c\n\b\e\1\x\u\9\1\f\9\v\0\x\9\d\x\o\5\5\t\c\2\h\f\p\e\j\q\4\m\i\4\v\g\9\3\3\u\k\c\3\g\i\p\k\l\0\l\q\d\p\c\0\z\l\s\f\7\m\o\m\6\k\b\m\f\0\1\m\3\w\w\8\y\1\x\b\w\v\y\e\g\8\3\i\s\2\c\z\m\h\d\a\x\i\x\p\8\r\9\x\m\h\j\1\x\r\7\m\o\c\n\3\u\r\g\v\v\z\j\u\8\y\c\h\g\p\q\j\r\d\r\3\k\1\2\k\u\6\k\x\7\2\w\x\8\w\m\7\5\d\a\6\m\r\f\n\m\z\h\2\x\h\n\0\n\k\p\v\y\g\f\d\0\d\t\1\g\a\l\v\v\k\a\f\9\7\o\b\w\8\w\3\b\j\y\c\z\v\f\j\o\z\6\d\4\h\c\u\s\w\9\c\7\k\h\2\1\w\q\8\3\4\6\d\c\p\m\m\4\6\u\l\5\l\y\6\7\f\e\3\j\2\z\h\i\p\n\a\9\4\q\e\o\5\8\f\j\u\8\3\3\9\9\u\m\d\m\2\b\c\9\9\w\i\9\m\q\x\8\r\h\c\j\3\g\9\l\l\k\z\c\a\x\q\u\t\0\s\u\k\t\d\e\8\q\x\y\8\1\y\c\z\n\4\y\0\l\a\i\t\r\2\u\p\x\x\j\w\w\c\m\i\s\y\y\1\z\k\f\f\1\i\s\r\s\h\q\0\6\x\v\c\4\j\0\d\o\s\f\q\j\j\w\i\a\8\r\m\j\o\4\o\u\i\m\8\7\q\z\s\b\7\l\g\q\6\2\t\0\a\q\i\x\l\y\h\2\2\0\1\7\3\9\n\d\4\j\e\e\z\1\a\1\f\o\4\n\7\3\e\g\i\k\n\h\p\2\1\h\y\0\1\x\0\k\d\h\j\g\0\q\d\2\2\s\r\5\w\f\b\l\6 ]] 00:06:39.556 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.556 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:39.815 [2024-12-10 21:33:40.355038] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:39.815 [2024-12-10 21:33:40.355170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:06:39.815 [2024-12-10 21:33:40.505872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.815 [2024-12-10 21:33:40.554126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.815 [2024-12-10 21:33:40.591264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.073  [2024-12-10T21:33:40.856Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.073 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ u0a6zwemhh8pt3dxqaufk2b1fe11k664hd4e34d1afzof8rq0t37gm4u32cnbe1xu91f9v0x9dxo55tc2hfpejq4mi4vg933ukc3gipkl0lqdpc0zlsf7mom6kbmf01m3ww8y1xbwvyeg83is2czmhdaxixp8r9xmhj1xr7mocn3urgvvzju8ychgpqjrdr3k12ku6kx72wx8wm75da6mrfnmzh2xhn0nkpvygfd0dt1galvvkaf97obw8w3bjyczvfjoz6d4hcusw9c7kh21wq8346dcpmm46ul5ly67fe3j2zhipna94qeo58fju83399umdm2bc99wi9mqx8rhcj3g9llkzcaxqut0suktde8qxy81yczn4y0laitr2upxxjwwcmisyy1zkff1isrshq06xvc4j0dosfqjjwia8rmjo4ouim87qzsb7lgq62t0aqixlyh2201739nd4jeez1a1fo4n73egiknhp21hy01x0kdhjg0qd22sr5wfbl6 == 
\u\0\a\6\z\w\e\m\h\h\8\p\t\3\d\x\q\a\u\f\k\2\b\1\f\e\1\1\k\6\6\4\h\d\4\e\3\4\d\1\a\f\z\o\f\8\r\q\0\t\3\7\g\m\4\u\3\2\c\n\b\e\1\x\u\9\1\f\9\v\0\x\9\d\x\o\5\5\t\c\2\h\f\p\e\j\q\4\m\i\4\v\g\9\3\3\u\k\c\3\g\i\p\k\l\0\l\q\d\p\c\0\z\l\s\f\7\m\o\m\6\k\b\m\f\0\1\m\3\w\w\8\y\1\x\b\w\v\y\e\g\8\3\i\s\2\c\z\m\h\d\a\x\i\x\p\8\r\9\x\m\h\j\1\x\r\7\m\o\c\n\3\u\r\g\v\v\z\j\u\8\y\c\h\g\p\q\j\r\d\r\3\k\1\2\k\u\6\k\x\7\2\w\x\8\w\m\7\5\d\a\6\m\r\f\n\m\z\h\2\x\h\n\0\n\k\p\v\y\g\f\d\0\d\t\1\g\a\l\v\v\k\a\f\9\7\o\b\w\8\w\3\b\j\y\c\z\v\f\j\o\z\6\d\4\h\c\u\s\w\9\c\7\k\h\2\1\w\q\8\3\4\6\d\c\p\m\m\4\6\u\l\5\l\y\6\7\f\e\3\j\2\z\h\i\p\n\a\9\4\q\e\o\5\8\f\j\u\8\3\3\9\9\u\m\d\m\2\b\c\9\9\w\i\9\m\q\x\8\r\h\c\j\3\g\9\l\l\k\z\c\a\x\q\u\t\0\s\u\k\t\d\e\8\q\x\y\8\1\y\c\z\n\4\y\0\l\a\i\t\r\2\u\p\x\x\j\w\w\c\m\i\s\y\y\1\z\k\f\f\1\i\s\r\s\h\q\0\6\x\v\c\4\j\0\d\o\s\f\q\j\j\w\i\a\8\r\m\j\o\4\o\u\i\m\8\7\q\z\s\b\7\l\g\q\6\2\t\0\a\q\i\x\l\y\h\2\2\0\1\7\3\9\n\d\4\j\e\e\z\1\a\1\f\o\4\n\7\3\e\g\i\k\n\h\p\2\1\h\y\0\1\x\0\k\d\h\j\g\0\q\d\2\2\s\r\5\w\f\b\l\6 ]] 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.073 21:33:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:40.332 [2024-12-10 21:33:40.866896] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:40.332 [2024-12-10 21:33:40.867034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60648 ] 00:06:40.332 [2024-12-10 21:33:41.017622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.332 [2024-12-10 21:33:41.058099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.332 [2024-12-10 21:33:41.087673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.332  [2024-12-10T21:33:41.373Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.590 00:06:40.590 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2pffumcziac9dgw02nhbgb16ch8ey4jpq3jsjjsh2d12pfkynca9tn1ma2m7t8mkcc8d8uax4g5tdzawbp6h82lcgic7akxjbf51d9x0nwqhftfym1o9wxxhf9pky0hnal7lq1kr5zwb49dznyaw00roy9lk3bhwyya68pikv7hn99jvjqbjjgc5rdxg3tqan7n5wp191944hrclgtaskcdnixg40m5ts1gfwga1r1rqz6dqwhs7k0mc4qpdrgyo4acbat1yx614n4ywijlt1bfnxa1aituggheukrpn468ylwhx67dckp6u1iduxs0jap7fkbeq10cu9yie1gbfhqj9c2pgy9tc70ifdzlilz5m67kv151yv8snueadikvwaboku9zj4fhtvatb7g4bu8j4a2enshi6nprmcen3r0c5ossynicf4yxia6wt5doeynyhxv99q7sawqgruj2g65nck52enxzty64z9042qy5ft7npikd22f4xhsknmqon == \2\p\f\f\u\m\c\z\i\a\c\9\d\g\w\0\2\n\h\b\g\b\1\6\c\h\8\e\y\4\j\p\q\3\j\s\j\j\s\h\2\d\1\2\p\f\k\y\n\c\a\9\t\n\1\m\a\2\m\7\t\8\m\k\c\c\8\d\8\u\a\x\4\g\5\t\d\z\a\w\b\p\6\h\8\2\l\c\g\i\c\7\a\k\x\j\b\f\5\1\d\9\x\0\n\w\q\h\f\t\f\y\m\1\o\9\w\x\x\h\f\9\p\k\y\0\h\n\a\l\7\l\q\1\k\r\5\z\w\b\4\9\d\z\n\y\a\w\0\0\r\o\y\9\l\k\3\b\h\w\y\y\a\6\8\p\i\k\v\7\h\n\9\9\j\v\j\q\b\j\j\g\c\5\r\d\x\g\3\t\q\a\n\7\n\5\w\p\1\9\1\9\4\4\h\r\c\l\g\t\a\s\k\c\d\n\i\x\g\4\0\m\5\t\s\1\g\f\w\g\a\1\r\1\r\q\z\6\d\q\w\h\s\7\k\0\m\c\4\q\p\d\r\g\y\o\4\a\c\b\a\t\1\y\x\6\1\4\n\4\y\w\i\j\l\t\1\b\f\n\x\a\1\a\i\t\u\g\g\h\e\u\k\r\p\n\4\6\8\y\l\w\h\x\6\7\d\c\k\p\6\u\1\i\d\u\x\s\0\j\a\p\7\f\k\b\e\q\1\0\c\u\9\y\i\e\1\g\b\f\h\q\j\9\c\2\p\g\y\9\t\c\7\0\i\f\d\z\l\i\l\z\5\m\6\7\k\v\1\5\1\y\v\8\s\n\u\e\a\d\i\k\v\w\a\b\o\k\u\9\z\j\4\f\h\t\v\a\t\b\7\g\4\b\u\8\j\4\a\2\e\n\s\h\i\6\n\p\r\m\c\e\n\3\r\0\c\5\o\s\s\y\n\i\c\f\4\y\x\i\a\6\w\t\5\d\o\e\y\n\y\h\x\v\9\9\q\7\s\a\w\q\g\r\u\j\2\g\6\5\n\c\k\5\2\e\n\x\z\t\y\6\4\z\9\0\4\2\q\y\5\f\t\7\n\p\i\k\d\2\2\f\4\x\h\s\k\n\m\q\o\n ]] 00:06:40.590 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.590 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:40.590 [2024-12-10 21:33:41.307222] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:40.590 [2024-12-10 21:33:41.307331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60655 ] 00:06:40.848 [2024-12-10 21:33:41.449471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.848 [2024-12-10 21:33:41.491235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.848 [2024-12-10 21:33:41.528880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.848  [2024-12-10T21:33:41.889Z] Copying: 512/512 [B] (average 500 kBps) 00:06:41.106 00:06:41.106 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2pffumcziac9dgw02nhbgb16ch8ey4jpq3jsjjsh2d12pfkynca9tn1ma2m7t8mkcc8d8uax4g5tdzawbp6h82lcgic7akxjbf51d9x0nwqhftfym1o9wxxhf9pky0hnal7lq1kr5zwb49dznyaw00roy9lk3bhwyya68pikv7hn99jvjqbjjgc5rdxg3tqan7n5wp191944hrclgtaskcdnixg40m5ts1gfwga1r1rqz6dqwhs7k0mc4qpdrgyo4acbat1yx614n4ywijlt1bfnxa1aituggheukrpn468ylwhx67dckp6u1iduxs0jap7fkbeq10cu9yie1gbfhqj9c2pgy9tc70ifdzlilz5m67kv151yv8snueadikvwaboku9zj4fhtvatb7g4bu8j4a2enshi6nprmcen3r0c5ossynicf4yxia6wt5doeynyhxv99q7sawqgruj2g65nck52enxzty64z9042qy5ft7npikd22f4xhsknmqon == \2\p\f\f\u\m\c\z\i\a\c\9\d\g\w\0\2\n\h\b\g\b\1\6\c\h\8\e\y\4\j\p\q\3\j\s\j\j\s\h\2\d\1\2\p\f\k\y\n\c\a\9\t\n\1\m\a\2\m\7\t\8\m\k\c\c\8\d\8\u\a\x\4\g\5\t\d\z\a\w\b\p\6\h\8\2\l\c\g\i\c\7\a\k\x\j\b\f\5\1\d\9\x\0\n\w\q\h\f\t\f\y\m\1\o\9\w\x\x\h\f\9\p\k\y\0\h\n\a\l\7\l\q\1\k\r\5\z\w\b\4\9\d\z\n\y\a\w\0\0\r\o\y\9\l\k\3\b\h\w\y\y\a\6\8\p\i\k\v\7\h\n\9\9\j\v\j\q\b\j\j\g\c\5\r\d\x\g\3\t\q\a\n\7\n\5\w\p\1\9\1\9\4\4\h\r\c\l\g\t\a\s\k\c\d\n\i\x\g\4\0\m\5\t\s\1\g\f\w\g\a\1\r\1\r\q\z\6\d\q\w\h\s\7\k\0\m\c\4\q\p\d\r\g\y\o\4\a\c\b\a\t\1\y\x\6\1\4\n\4\y\w\i\j\l\t\1\b\f\n\x\a\1\a\i\t\u\g\g\h\e\u\k\r\p\n\4\6\8\y\l\w\h\x\6\7\d\c\k\p\6\u\1\i\d\u\x\s\0\j\a\p\7\f\k\b\e\q\1\0\c\u\9\y\i\e\1\g\b\f\h\q\j\9\c\2\p\g\y\9\t\c\7\0\i\f\d\z\l\i\l\z\5\m\6\7\k\v\1\5\1\y\v\8\s\n\u\e\a\d\i\k\v\w\a\b\o\k\u\9\z\j\4\f\h\t\v\a\t\b\7\g\4\b\u\8\j\4\a\2\e\n\s\h\i\6\n\p\r\m\c\e\n\3\r\0\c\5\o\s\s\y\n\i\c\f\4\y\x\i\a\6\w\t\5\d\o\e\y\n\y\h\x\v\9\9\q\7\s\a\w\q\g\r\u\j\2\g\6\5\n\c\k\5\2\e\n\x\z\t\y\6\4\z\9\0\4\2\q\y\5\f\t\7\n\p\i\k\d\2\2\f\4\x\h\s\k\n\m\q\o\n ]] 00:06:41.106 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.106 21:33:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:41.106 [2024-12-10 21:33:41.789989] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:41.106 [2024-12-10 21:33:41.790135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:06:41.364 [2024-12-10 21:33:41.933401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.364 [2024-12-10 21:33:41.981737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.364 [2024-12-10 21:33:42.016850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.364  [2024-12-10T21:33:42.405Z] Copying: 512/512 [B] (average 250 kBps) 00:06:41.622 00:06:41.622 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2pffumcziac9dgw02nhbgb16ch8ey4jpq3jsjjsh2d12pfkynca9tn1ma2m7t8mkcc8d8uax4g5tdzawbp6h82lcgic7akxjbf51d9x0nwqhftfym1o9wxxhf9pky0hnal7lq1kr5zwb49dznyaw00roy9lk3bhwyya68pikv7hn99jvjqbjjgc5rdxg3tqan7n5wp191944hrclgtaskcdnixg40m5ts1gfwga1r1rqz6dqwhs7k0mc4qpdrgyo4acbat1yx614n4ywijlt1bfnxa1aituggheukrpn468ylwhx67dckp6u1iduxs0jap7fkbeq10cu9yie1gbfhqj9c2pgy9tc70ifdzlilz5m67kv151yv8snueadikvwaboku9zj4fhtvatb7g4bu8j4a2enshi6nprmcen3r0c5ossynicf4yxia6wt5doeynyhxv99q7sawqgruj2g65nck52enxzty64z9042qy5ft7npikd22f4xhsknmqon == \2\p\f\f\u\m\c\z\i\a\c\9\d\g\w\0\2\n\h\b\g\b\1\6\c\h\8\e\y\4\j\p\q\3\j\s\j\j\s\h\2\d\1\2\p\f\k\y\n\c\a\9\t\n\1\m\a\2\m\7\t\8\m\k\c\c\8\d\8\u\a\x\4\g\5\t\d\z\a\w\b\p\6\h\8\2\l\c\g\i\c\7\a\k\x\j\b\f\5\1\d\9\x\0\n\w\q\h\f\t\f\y\m\1\o\9\w\x\x\h\f\9\p\k\y\0\h\n\a\l\7\l\q\1\k\r\5\z\w\b\4\9\d\z\n\y\a\w\0\0\r\o\y\9\l\k\3\b\h\w\y\y\a\6\8\p\i\k\v\7\h\n\9\9\j\v\j\q\b\j\j\g\c\5\r\d\x\g\3\t\q\a\n\7\n\5\w\p\1\9\1\9\4\4\h\r\c\l\g\t\a\s\k\c\d\n\i\x\g\4\0\m\5\t\s\1\g\f\w\g\a\1\r\1\r\q\z\6\d\q\w\h\s\7\k\0\m\c\4\q\p\d\r\g\y\o\4\a\c\b\a\t\1\y\x\6\1\4\n\4\y\w\i\j\l\t\1\b\f\n\x\a\1\a\i\t\u\g\g\h\e\u\k\r\p\n\4\6\8\y\l\w\h\x\6\7\d\c\k\p\6\u\1\i\d\u\x\s\0\j\a\p\7\f\k\b\e\q\1\0\c\u\9\y\i\e\1\g\b\f\h\q\j\9\c\2\p\g\y\9\t\c\7\0\i\f\d\z\l\i\l\z\5\m\6\7\k\v\1\5\1\y\v\8\s\n\u\e\a\d\i\k\v\w\a\b\o\k\u\9\z\j\4\f\h\t\v\a\t\b\7\g\4\b\u\8\j\4\a\2\e\n\s\h\i\6\n\p\r\m\c\e\n\3\r\0\c\5\o\s\s\y\n\i\c\f\4\y\x\i\a\6\w\t\5\d\o\e\y\n\y\h\x\v\9\9\q\7\s\a\w\q\g\r\u\j\2\g\6\5\n\c\k\5\2\e\n\x\z\t\y\6\4\z\9\0\4\2\q\y\5\f\t\7\n\p\i\k\d\2\2\f\4\x\h\s\k\n\m\q\o\n ]] 00:06:41.622 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.622 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:41.622 [2024-12-10 21:33:42.236716] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:41.622 [2024-12-10 21:33:42.236829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60665 ] 00:06:41.622 [2024-12-10 21:33:42.382942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.881 [2024-12-10 21:33:42.417587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.881 [2024-12-10 21:33:42.450862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.881  [2024-12-10T21:33:42.664Z] Copying: 512/512 [B] (average 500 kBps) 00:06:41.881 00:06:41.881 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2pffumcziac9dgw02nhbgb16ch8ey4jpq3jsjjsh2d12pfkynca9tn1ma2m7t8mkcc8d8uax4g5tdzawbp6h82lcgic7akxjbf51d9x0nwqhftfym1o9wxxhf9pky0hnal7lq1kr5zwb49dznyaw00roy9lk3bhwyya68pikv7hn99jvjqbjjgc5rdxg3tqan7n5wp191944hrclgtaskcdnixg40m5ts1gfwga1r1rqz6dqwhs7k0mc4qpdrgyo4acbat1yx614n4ywijlt1bfnxa1aituggheukrpn468ylwhx67dckp6u1iduxs0jap7fkbeq10cu9yie1gbfhqj9c2pgy9tc70ifdzlilz5m67kv151yv8snueadikvwaboku9zj4fhtvatb7g4bu8j4a2enshi6nprmcen3r0c5ossynicf4yxia6wt5doeynyhxv99q7sawqgruj2g65nck52enxzty64z9042qy5ft7npikd22f4xhsknmqon == \2\p\f\f\u\m\c\z\i\a\c\9\d\g\w\0\2\n\h\b\g\b\1\6\c\h\8\e\y\4\j\p\q\3\j\s\j\j\s\h\2\d\1\2\p\f\k\y\n\c\a\9\t\n\1\m\a\2\m\7\t\8\m\k\c\c\8\d\8\u\a\x\4\g\5\t\d\z\a\w\b\p\6\h\8\2\l\c\g\i\c\7\a\k\x\j\b\f\5\1\d\9\x\0\n\w\q\h\f\t\f\y\m\1\o\9\w\x\x\h\f\9\p\k\y\0\h\n\a\l\7\l\q\1\k\r\5\z\w\b\4\9\d\z\n\y\a\w\0\0\r\o\y\9\l\k\3\b\h\w\y\y\a\6\8\p\i\k\v\7\h\n\9\9\j\v\j\q\b\j\j\g\c\5\r\d\x\g\3\t\q\a\n\7\n\5\w\p\1\9\1\9\4\4\h\r\c\l\g\t\a\s\k\c\d\n\i\x\g\4\0\m\5\t\s\1\g\f\w\g\a\1\r\1\r\q\z\6\d\q\w\h\s\7\k\0\m\c\4\q\p\d\r\g\y\o\4\a\c\b\a\t\1\y\x\6\1\4\n\4\y\w\i\j\l\t\1\b\f\n\x\a\1\a\i\t\u\g\g\h\e\u\k\r\p\n\4\6\8\y\l\w\h\x\6\7\d\c\k\p\6\u\1\i\d\u\x\s\0\j\a\p\7\f\k\b\e\q\1\0\c\u\9\y\i\e\1\g\b\f\h\q\j\9\c\2\p\g\y\9\t\c\7\0\i\f\d\z\l\i\l\z\5\m\6\7\k\v\1\5\1\y\v\8\s\n\u\e\a\d\i\k\v\w\a\b\o\k\u\9\z\j\4\f\h\t\v\a\t\b\7\g\4\b\u\8\j\4\a\2\e\n\s\h\i\6\n\p\r\m\c\e\n\3\r\0\c\5\o\s\s\y\n\i\c\f\4\y\x\i\a\6\w\t\5\d\o\e\y\n\y\h\x\v\9\9\q\7\s\a\w\q\g\r\u\j\2\g\6\5\n\c\k\5\2\e\n\x\z\t\y\6\4\z\9\0\4\2\q\y\5\f\t\7\n\p\i\k\d\2\2\f\4\x\h\s\k\n\m\q\o\n ]] 00:06:41.881 00:06:41.881 real 0m3.803s 00:06:41.881 user 0m1.968s 00:06:41.881 sys 0m0.838s 00:06:41.881 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.881 21:33:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 ************************************ 00:06:41.881 END TEST dd_flags_misc_forced_aio 00:06:41.881 ************************************ 00:06:42.140 21:33:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:42.140 21:33:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:42.140 21:33:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:42.140 00:06:42.140 real 0m17.015s 00:06:42.140 user 0m7.753s 00:06:42.140 sys 0m4.606s 00:06:42.140 21:33:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.140 21:33:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:42.140 ************************************ 00:06:42.140 END TEST spdk_dd_posix 00:06:42.140 ************************************ 00:06:42.140 21:33:42 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:42.140 21:33:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.140 21:33:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.140 21:33:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:42.140 ************************************ 00:06:42.140 START TEST spdk_dd_malloc 00:06:42.140 ************************************ 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:42.140 * Looking for test storage... 00:06:42.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.140 --rc genhtml_branch_coverage=1 00:06:42.140 --rc genhtml_function_coverage=1 00:06:42.140 --rc genhtml_legend=1 00:06:42.140 --rc geninfo_all_blocks=1 00:06:42.140 --rc geninfo_unexecuted_blocks=1 00:06:42.140 00:06:42.140 ' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.140 --rc genhtml_branch_coverage=1 00:06:42.140 --rc genhtml_function_coverage=1 00:06:42.140 --rc genhtml_legend=1 00:06:42.140 --rc geninfo_all_blocks=1 00:06:42.140 --rc geninfo_unexecuted_blocks=1 00:06:42.140 00:06:42.140 ' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.140 --rc genhtml_branch_coverage=1 00:06:42.140 --rc genhtml_function_coverage=1 00:06:42.140 --rc genhtml_legend=1 00:06:42.140 --rc geninfo_all_blocks=1 00:06:42.140 --rc geninfo_unexecuted_blocks=1 00:06:42.140 00:06:42.140 ' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.140 --rc genhtml_branch_coverage=1 00:06:42.140 --rc genhtml_function_coverage=1 00:06:42.140 --rc genhtml_legend=1 00:06:42.140 --rc geninfo_all_blocks=1 00:06:42.140 --rc geninfo_unexecuted_blocks=1 00:06:42.140 00:06:42.140 ' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.140 21:33:42 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.140 21:33:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:42.399 ************************************ 00:06:42.399 START TEST dd_malloc_copy 00:06:42.399 ************************************ 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:42.399 21:33:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.399 { 00:06:42.399 "subsystems": [ 00:06:42.399 { 00:06:42.399 "subsystem": "bdev", 00:06:42.399 "config": [ 00:06:42.399 { 00:06:42.399 "params": { 00:06:42.399 "block_size": 512, 00:06:42.399 "num_blocks": 1048576, 00:06:42.399 "name": "malloc0" 00:06:42.399 }, 00:06:42.399 "method": "bdev_malloc_create" 00:06:42.399 }, 00:06:42.399 { 00:06:42.399 "params": { 00:06:42.399 "block_size": 512, 00:06:42.399 "num_blocks": 1048576, 00:06:42.399 "name": "malloc1" 00:06:42.399 }, 00:06:42.399 "method": "bdev_malloc_create" 00:06:42.399 }, 00:06:42.399 { 00:06:42.399 "method": "bdev_wait_for_examine" 00:06:42.399 } 00:06:42.399 ] 00:06:42.399 } 00:06:42.399 ] 00:06:42.399 } 00:06:42.399 [2024-12-10 21:33:42.995518] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:42.399 [2024-12-10 21:33:42.995656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:06:42.399 [2024-12-10 21:33:43.153744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.657 [2024-12-10 21:33:43.188018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.657 [2024-12-10 21:33:43.218800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.030  [2024-12-10T21:33:45.748Z] Copying: 181/512 [MB] (181 MBps) [2024-12-10T21:33:46.683Z] Copying: 348/512 [MB] (166 MBps) [2024-12-10T21:33:46.941Z] Copying: 512/512 [MB] (average 175 MBps) 00:06:46.158 00:06:46.158 21:33:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:46.158 21:33:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:46.158 21:33:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:46.158 21:33:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.158 [2024-12-10 21:33:46.753522] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:46.158 [2024-12-10 21:33:46.753655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:06:46.158 { 00:06:46.158 "subsystems": [ 00:06:46.158 { 00:06:46.158 "subsystem": "bdev", 00:06:46.158 "config": [ 00:06:46.158 { 00:06:46.158 "params": { 00:06:46.158 "block_size": 512, 00:06:46.158 "num_blocks": 1048576, 00:06:46.158 "name": "malloc0" 00:06:46.158 }, 00:06:46.158 "method": "bdev_malloc_create" 00:06:46.158 }, 00:06:46.158 { 00:06:46.158 "params": { 00:06:46.158 "block_size": 512, 00:06:46.158 "num_blocks": 1048576, 00:06:46.158 "name": "malloc1" 00:06:46.158 }, 00:06:46.158 "method": "bdev_malloc_create" 00:06:46.158 }, 00:06:46.158 { 00:06:46.158 "method": "bdev_wait_for_examine" 00:06:46.158 } 00:06:46.158 ] 00:06:46.158 } 00:06:46.158 ] 00:06:46.158 } 00:06:46.158 [2024-12-10 21:33:46.897909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.416 [2024-12-10 21:33:46.945689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.416 [2024-12-10 21:33:46.976864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.791  [2024-12-10T21:33:49.508Z] Copying: 178/512 [MB] (178 MBps) [2024-12-10T21:33:50.442Z] Copying: 350/512 [MB] (172 MBps) [2024-12-10T21:33:50.700Z] Copying: 512/512 [MB] (average 174 MBps) 00:06:49.917 00:06:49.917 00:06:49.917 real 0m7.537s 00:06:49.917 user 0m6.813s 00:06:49.917 sys 0m0.516s 00:06:49.917 21:33:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.917 ************************************ 00:06:49.917 END TEST dd_malloc_copy 00:06:49.917 21:33:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:49.917 ************************************ 00:06:49.917 ************************************ 00:06:49.917 END TEST spdk_dd_malloc 00:06:49.917 ************************************ 00:06:49.917 00:06:49.917 real 0m7.771s 00:06:49.917 user 0m6.959s 00:06:49.917 sys 0m0.608s 00:06:49.917 21:33:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.917 21:33:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:49.917 21:33:50 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:49.917 21:33:50 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:49.917 21:33:50 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.917 21:33:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.917 ************************************ 00:06:49.917 START TEST spdk_dd_bdev_to_bdev 00:06:49.917 ************************************ 00:06:49.917 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:49.917 * Looking for test storage... 
00:06:49.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.918 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.918 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.918 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.176 --rc genhtml_branch_coverage=1 00:06:50.176 --rc genhtml_function_coverage=1 00:06:50.176 --rc genhtml_legend=1 00:06:50.176 --rc geninfo_all_blocks=1 00:06:50.176 --rc geninfo_unexecuted_blocks=1 00:06:50.176 00:06:50.176 ' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.176 --rc genhtml_branch_coverage=1 00:06:50.176 --rc genhtml_function_coverage=1 00:06:50.176 --rc genhtml_legend=1 00:06:50.176 --rc geninfo_all_blocks=1 00:06:50.176 --rc geninfo_unexecuted_blocks=1 00:06:50.176 00:06:50.176 ' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.176 --rc genhtml_branch_coverage=1 00:06:50.176 --rc genhtml_function_coverage=1 00:06:50.176 --rc genhtml_legend=1 00:06:50.176 --rc geninfo_all_blocks=1 00:06:50.176 --rc geninfo_unexecuted_blocks=1 00:06:50.176 00:06:50.176 ' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:50.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.176 --rc genhtml_branch_coverage=1 00:06:50.176 --rc genhtml_function_coverage=1 00:06:50.176 --rc genhtml_legend=1 00:06:50.176 --rc geninfo_all_blocks=1 00:06:50.176 --rc geninfo_unexecuted_blocks=1 00:06:50.176 00:06:50.176 ' 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.176 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.176 21:33:50 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.177 ************************************ 00:06:50.177 START TEST dd_inflate_file 00:06:50.177 ************************************ 00:06:50.177 21:33:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:50.177 [2024-12-10 21:33:50.794974] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:50.177 [2024-12-10 21:33:50.795082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:06:50.177 [2024-12-10 21:33:50.936417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.441 [2024-12-10 21:33:50.971822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.441 [2024-12-10 21:33:51.001497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.441  [2024-12-10T21:33:51.224Z] Copying: 64/64 [MB] (average 1422 MBps) 00:06:50.441 00:06:50.441 00:06:50.441 real 0m0.442s 00:06:50.441 user 0m0.244s 00:06:50.441 sys 0m0.232s 00:06:50.441 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.441 ************************************ 00:06:50.441 END TEST dd_inflate_file 00:06:50.441 ************************************ 00:06:50.441 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:50.700 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:50.700 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:50.700 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:50.700 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:50.701 ************************************ 00:06:50.701 START TEST dd_copy_to_out_bdev 00:06:50.701 ************************************ 00:06:50.701 21:33:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:50.701 { 00:06:50.701 "subsystems": [ 00:06:50.701 { 00:06:50.701 "subsystem": "bdev", 00:06:50.701 "config": [ 00:06:50.701 { 00:06:50.701 "params": { 00:06:50.701 "trtype": "pcie", 00:06:50.701 "traddr": "0000:00:10.0", 00:06:50.701 "name": "Nvme0" 00:06:50.701 }, 00:06:50.701 "method": "bdev_nvme_attach_controller" 00:06:50.701 }, 00:06:50.701 { 00:06:50.701 "params": { 00:06:50.701 "trtype": "pcie", 00:06:50.701 "traddr": "0000:00:11.0", 00:06:50.701 "name": "Nvme1" 00:06:50.701 }, 00:06:50.701 "method": "bdev_nvme_attach_controller" 00:06:50.701 }, 00:06:50.701 { 00:06:50.701 "method": "bdev_wait_for_examine" 00:06:50.701 } 00:06:50.701 ] 00:06:50.701 } 00:06:50.701 ] 00:06:50.701 } 00:06:50.701 [2024-12-10 21:33:51.304280] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:50.701 [2024-12-10 21:33:51.304437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:06:50.701 [2024-12-10 21:33:51.455597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.958 [2024-12-10 21:33:51.498119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.958 [2024-12-10 21:33:51.532768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.983  [2024-12-10T21:33:53.024Z] Copying: 64/64 [MB] (average 65 MBps) 00:06:52.241 00:06:52.241 00:06:52.241 real 0m1.603s 00:06:52.241 user 0m1.416s 00:06:52.241 sys 0m1.225s 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.241 ************************************ 00:06:52.241 END TEST dd_copy_to_out_bdev 00:06:52.241 ************************************ 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.241 ************************************ 00:06:52.241 START TEST dd_offset_magic 00:06:52.241 ************************************ 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:52.241 21:33:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:52.241 [2024-12-10 21:33:52.952524] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:52.241 [2024-12-10 21:33:52.953284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:06:52.241 { 00:06:52.241 "subsystems": [ 00:06:52.241 { 00:06:52.241 "subsystem": "bdev", 00:06:52.241 "config": [ 00:06:52.241 { 00:06:52.241 "params": { 00:06:52.241 "trtype": "pcie", 00:06:52.241 "traddr": "0000:00:10.0", 00:06:52.241 "name": "Nvme0" 00:06:52.241 }, 00:06:52.241 "method": "bdev_nvme_attach_controller" 00:06:52.241 }, 00:06:52.241 { 00:06:52.241 "params": { 00:06:52.241 "trtype": "pcie", 00:06:52.241 "traddr": "0000:00:11.0", 00:06:52.241 "name": "Nvme1" 00:06:52.241 }, 00:06:52.241 "method": "bdev_nvme_attach_controller" 00:06:52.241 }, 00:06:52.241 { 00:06:52.241 "method": "bdev_wait_for_examine" 00:06:52.241 } 00:06:52.241 ] 00:06:52.241 } 00:06:52.241 ] 00:06:52.241 } 00:06:52.499 [2024-12-10 21:33:53.101396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.499 [2024-12-10 21:33:53.150609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.499 [2024-12-10 21:33:53.187120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.757  [2024-12-10T21:33:53.798Z] Copying: 65/65 [MB] (average 1585 MBps) 00:06:53.015 00:06:53.015 21:33:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:53.015 21:33:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:53.015 21:33:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:53.015 21:33:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:53.015 [2024-12-10 21:33:53.664850] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:53.015 [2024-12-10 21:33:53.664942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61000 ] 00:06:53.015 { 00:06:53.015 "subsystems": [ 00:06:53.015 { 00:06:53.015 "subsystem": "bdev", 00:06:53.015 "config": [ 00:06:53.015 { 00:06:53.015 "params": { 00:06:53.015 "trtype": "pcie", 00:06:53.015 "traddr": "0000:00:10.0", 00:06:53.015 "name": "Nvme0" 00:06:53.015 }, 00:06:53.015 "method": "bdev_nvme_attach_controller" 00:06:53.015 }, 00:06:53.015 { 00:06:53.015 "params": { 00:06:53.015 "trtype": "pcie", 00:06:53.015 "traddr": "0000:00:11.0", 00:06:53.015 "name": "Nvme1" 00:06:53.015 }, 00:06:53.015 "method": "bdev_nvme_attach_controller" 00:06:53.015 }, 00:06:53.015 { 00:06:53.015 "method": "bdev_wait_for_examine" 00:06:53.015 } 00:06:53.015 ] 00:06:53.015 } 00:06:53.015 ] 00:06:53.015 } 00:06:53.273 [2024-12-10 21:33:53.818997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.273 [2024-12-10 21:33:53.853077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.273 [2024-12-10 21:33:53.884614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.273  [2024-12-10T21:33:54.314Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:53.531 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:53.531 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:53.531 { 00:06:53.531 "subsystems": [ 00:06:53.531 { 00:06:53.531 "subsystem": "bdev", 00:06:53.531 "config": [ 00:06:53.531 { 00:06:53.531 "params": { 00:06:53.531 "trtype": "pcie", 00:06:53.531 "traddr": "0000:00:10.0", 00:06:53.531 "name": "Nvme0" 00:06:53.531 }, 00:06:53.531 "method": "bdev_nvme_attach_controller" 00:06:53.531 }, 00:06:53.531 { 00:06:53.531 "params": { 00:06:53.531 "trtype": "pcie", 00:06:53.531 "traddr": "0000:00:11.0", 00:06:53.531 "name": "Nvme1" 00:06:53.531 }, 00:06:53.531 "method": "bdev_nvme_attach_controller" 00:06:53.531 }, 00:06:53.531 { 00:06:53.531 "method": "bdev_wait_for_examine" 00:06:53.531 } 00:06:53.531 ] 00:06:53.531 } 00:06:53.531 ] 00:06:53.531 } 00:06:53.531 [2024-12-10 21:33:54.257510] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:53.531 [2024-12-10 21:33:54.257646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:06:53.789 [2024-12-10 21:33:54.407522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.789 [2024-12-10 21:33:54.456721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.789 [2024-12-10 21:33:54.487873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.047  [2024-12-10T21:33:55.088Z] Copying: 65/65 [MB] (average 1413 MBps) 00:06:54.305 00:06:54.305 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:54.305 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:54.305 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:54.305 21:33:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:54.305 [2024-12-10 21:33:54.970358] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:54.306 [2024-12-10 21:33:54.970515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:06:54.306 { 00:06:54.306 "subsystems": [ 00:06:54.306 { 00:06:54.306 "subsystem": "bdev", 00:06:54.306 "config": [ 00:06:54.306 { 00:06:54.306 "params": { 00:06:54.306 "trtype": "pcie", 00:06:54.306 "traddr": "0000:00:10.0", 00:06:54.306 "name": "Nvme0" 00:06:54.306 }, 00:06:54.306 "method": "bdev_nvme_attach_controller" 00:06:54.306 }, 00:06:54.306 { 00:06:54.306 "params": { 00:06:54.306 "trtype": "pcie", 00:06:54.306 "traddr": "0000:00:11.0", 00:06:54.306 "name": "Nvme1" 00:06:54.306 }, 00:06:54.306 "method": "bdev_nvme_attach_controller" 00:06:54.306 }, 00:06:54.306 { 00:06:54.306 "method": "bdev_wait_for_examine" 00:06:54.306 } 00:06:54.306 ] 00:06:54.306 } 00:06:54.306 ] 00:06:54.306 } 00:06:54.565 [2024-12-10 21:33:55.117486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.565 [2024-12-10 21:33:55.161606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.565 [2024-12-10 21:33:55.199581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.823  [2024-12-10T21:33:55.606Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:54.823 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:54.823 00:06:54.823 real 0m2.625s 00:06:54.823 user 0m1.936s 00:06:54.823 sys 0m0.682s 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:54.823 
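Each dd_offset_magic round above drives two spdk_dd invocations against the same two-controller bdev config: a 65 MiB copy from Nvme0n1 to Nvme1n1 at the chosen 1 MiB offset (--count=65 with --seek=16 or --seek=64), then a single 1 MiB read-back from Nvme1n1 (--count=1 --skip=16 or --skip=64) whose first 26 bytes must equal the magic string. A rough standalone sketch of one round, assuming the same config the test's gen_conf emits; the dump path is shortened and the process substitution stands in for the /dev/fd/62 descriptor used in the log:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
    {"params":{"trtype":"pcie","traddr":"0000:00:11.0","name":"Nvme1"},"method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  # copy a 65 MiB window between the two namespaces at offset 64 MiB
  "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json <(printf '%s' "$CONF")
  # dump one 1 MiB block back from the target and check the 26-byte magic prefix
  "$SPDK_DD" --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json <(printf '%s' "$CONF")
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == 'This Is Our Magic, find it' ]] && echo 'magic found'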
************************************ 00:06:54.823 END TEST dd_offset_magic 00:06:54.823 ************************************ 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:54.823 21:33:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.082 [2024-12-10 21:33:55.615585] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:55.082 [2024-12-10 21:33:55.615726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61068 ] 00:06:55.082 { 00:06:55.082 "subsystems": [ 00:06:55.082 { 00:06:55.082 "subsystem": "bdev", 00:06:55.082 "config": [ 00:06:55.082 { 00:06:55.082 "params": { 00:06:55.082 "trtype": "pcie", 00:06:55.082 "traddr": "0000:00:10.0", 00:06:55.082 "name": "Nvme0" 00:06:55.082 }, 00:06:55.082 "method": "bdev_nvme_attach_controller" 00:06:55.082 }, 00:06:55.082 { 00:06:55.082 "params": { 00:06:55.082 "trtype": "pcie", 00:06:55.082 "traddr": "0000:00:11.0", 00:06:55.082 "name": "Nvme1" 00:06:55.082 }, 00:06:55.082 "method": "bdev_nvme_attach_controller" 00:06:55.082 }, 00:06:55.082 { 00:06:55.082 "method": "bdev_wait_for_examine" 00:06:55.082 } 00:06:55.082 ] 00:06:55.082 } 00:06:55.082 ] 00:06:55.082 } 00:06:55.082 [2024-12-10 21:33:55.767058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.082 [2024-12-10 21:33:55.817955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.082 [2024-12-10 21:33:55.855676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.340  [2024-12-10T21:33:56.381Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:06:55.598 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 
--count=5 --json /dev/fd/62 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:55.598 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.598 [2024-12-10 21:33:56.267533] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:55.598 [2024-12-10 21:33:56.268576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61089 ] 00:06:55.598 { 00:06:55.598 "subsystems": [ 00:06:55.598 { 00:06:55.598 "subsystem": "bdev", 00:06:55.598 "config": [ 00:06:55.598 { 00:06:55.598 "params": { 00:06:55.598 "trtype": "pcie", 00:06:55.598 "traddr": "0000:00:10.0", 00:06:55.598 "name": "Nvme0" 00:06:55.598 }, 00:06:55.598 "method": "bdev_nvme_attach_controller" 00:06:55.598 }, 00:06:55.598 { 00:06:55.598 "params": { 00:06:55.598 "trtype": "pcie", 00:06:55.598 "traddr": "0000:00:11.0", 00:06:55.598 "name": "Nvme1" 00:06:55.598 }, 00:06:55.598 "method": "bdev_nvme_attach_controller" 00:06:55.598 }, 00:06:55.598 { 00:06:55.598 "method": "bdev_wait_for_examine" 00:06:55.598 } 00:06:55.598 ] 00:06:55.598 } 00:06:55.598 ] 00:06:55.598 } 00:06:55.856 [2024-12-10 21:33:56.417112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.856 [2024-12-10 21:33:56.467315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.856 [2024-12-10 21:33:56.506026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.114  [2024-12-10T21:33:56.897Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:56.114 00:06:56.114 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:56.114 00:06:56.114 real 0m6.306s 00:06:56.114 user 0m4.680s 00:06:56.114 sys 0m2.735s 00:06:56.114 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.114 21:33:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:56.114 ************************************ 00:06:56.114 END TEST spdk_dd_bdev_to_bdev 00:06:56.114 ************************************ 00:06:56.114 21:33:56 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:56.114 21:33:56 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:56.114 21:33:56 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.114 21:33:56 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.114 21:33:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:56.114 ************************************ 00:06:56.114 START TEST spdk_dd_uring 00:06:56.114 ************************************ 00:06:56.114 21:33:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:56.372 * Looking for test storage... 
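The clear_nvme calls in the bdev_to_bdev cleanup above wipe the test pattern by zero-filling each namespace: the 4194330-byte region the test touched (4 MiB of data plus the 26-byte magic) rounds up to five 1 MiB blocks overwritten from /dev/zero. In isolation the calls reduce to roughly the lines below; CONF_PATH is a placeholder for the generated two-controller JSON shown earlier, not a path taken from the log:

  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json "$CONF_PATH"
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json "$CONF_PATH"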
00:06:56.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:56.372 21:33:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.372 21:33:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.372 21:33:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.372 --rc genhtml_branch_coverage=1 00:06:56.372 --rc genhtml_function_coverage=1 00:06:56.372 --rc genhtml_legend=1 00:06:56.372 --rc geninfo_all_blocks=1 00:06:56.372 --rc geninfo_unexecuted_blocks=1 00:06:56.372 00:06:56.372 ' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.372 --rc genhtml_branch_coverage=1 00:06:56.372 --rc genhtml_function_coverage=1 00:06:56.372 --rc genhtml_legend=1 00:06:56.372 --rc geninfo_all_blocks=1 00:06:56.372 --rc geninfo_unexecuted_blocks=1 00:06:56.372 00:06:56.372 ' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.372 --rc genhtml_branch_coverage=1 00:06:56.372 --rc genhtml_function_coverage=1 00:06:56.372 --rc genhtml_legend=1 00:06:56.372 --rc geninfo_all_blocks=1 00:06:56.372 --rc geninfo_unexecuted_blocks=1 00:06:56.372 00:06:56.372 ' 00:06:56.372 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.372 --rc genhtml_branch_coverage=1 00:06:56.372 --rc genhtml_function_coverage=1 00:06:56.372 --rc genhtml_legend=1 00:06:56.372 --rc geninfo_all_blocks=1 00:06:56.373 --rc geninfo_unexecuted_blocks=1 00:06:56.373 00:06:56.373 ' 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:56.373 ************************************ 00:06:56.373 START TEST dd_uring_copy 00:06:56.373 ************************************ 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:56.373 
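The dd_uring_copy setup that follows builds both bdevs from scratch: a zram device is hot-added and sized to 512M through sysfs, and spdk_dd is then handed a config that attaches a uring bdev to /dev/zram1 plus a 512 MiB malloc bdev to copy against. A condensed sketch of that setup, assuming the zram-control interface is available (the device index happens to be 1 in this run, matching the JSON that appears later in the log):

  # create and size a zram backing device (requires the zram module)
  id=$(cat /sys/class/zram-control/hot_add)        # allocates a new device and prints its index
  echo 512M > /sys/block/zram${id}/disksize
  # bdev config handed to spdk_dd for the copy and verify steps
  CONF='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
    {"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},
    {"method":"bdev_wait_for_examine"}]}]}'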
21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=e6wbj3kendi57r5apm511xrjgbg8mawafgjc9s9t0wbsbv46zm7b763mty12fyoja1zt16mcrv4miye6ctz0wkdsbjgr6bwv9ejpl2toeayo892v8mpxb3tswqhgrvvr5itpst1zp1bd37igi2yc71zaa1rubjfxpwdge06hu2qkumz45b9k2rq1vwb3bvyv5xqp9pluxfih1c1d9b37a88423xcsnawd9nma1g2sczmmktbo7pm1579dvmgp5iepex73dt8tqmy7d96fczjixyda4bxfluytzzz5gpm1fb9uvbgmzu2wkkn63kygthpa34s0pftozz3s3pfe3bhebg0cmpp4vzfm64k909hhrb0pj2t40ijopge4orock7xrlwq8yf2xyy804atppe96tg8wh37isknima8h90xyv34r1nkm8swk3vrqdwvmzsz198yx7261gjojzmopoe3bog2o07b0lbuib98m9dmegx1ed2c70oxkr8p5w1wq7n2nszy3rlybkqldd8mwic2n49r7vuup0l2w1qlt0roq7xt0xcaaybasc3k2g4hkclat4p3o8zr1s0hatj8rem3loy52f3hsoo1by4afa4yxpep2lc6d8guyej5kr41o88yub6i2r1rhisryqh39vqcdtac1ziphcsqnfnd1mtf125tm169fti5v4kvuvjmysvrjihyk76yt8f96l7mzi7ne6eqmd01qz98f8n4liz36ma7qoh5ojd1uxrx4dkswbop9jqsqlpqn8fh6n2y5ct5y3lcd0e1qslcat0mcn485slocjukedtofu336bupk2c57047qhqpzgx9p1z9ypfgyayk6gh7c7907y9ff8liaumshb3z1a6vwqi9322qpgtn14zvlruoi4uyby1hyyoqf1i1bujb1qcmtim7ncxzsufv7b8ycno8u38fzavy5nj4hrh6l6znnkxuhmgacdxoc6ia7rofy0wib9dlog3ebbs6ave0gilby6mtzlcv2bpi 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
e6wbj3kendi57r5apm511xrjgbg8mawafgjc9s9t0wbsbv46zm7b763mty12fyoja1zt16mcrv4miye6ctz0wkdsbjgr6bwv9ejpl2toeayo892v8mpxb3tswqhgrvvr5itpst1zp1bd37igi2yc71zaa1rubjfxpwdge06hu2qkumz45b9k2rq1vwb3bvyv5xqp9pluxfih1c1d9b37a88423xcsnawd9nma1g2sczmmktbo7pm1579dvmgp5iepex73dt8tqmy7d96fczjixyda4bxfluytzzz5gpm1fb9uvbgmzu2wkkn63kygthpa34s0pftozz3s3pfe3bhebg0cmpp4vzfm64k909hhrb0pj2t40ijopge4orock7xrlwq8yf2xyy804atppe96tg8wh37isknima8h90xyv34r1nkm8swk3vrqdwvmzsz198yx7261gjojzmopoe3bog2o07b0lbuib98m9dmegx1ed2c70oxkr8p5w1wq7n2nszy3rlybkqldd8mwic2n49r7vuup0l2w1qlt0roq7xt0xcaaybasc3k2g4hkclat4p3o8zr1s0hatj8rem3loy52f3hsoo1by4afa4yxpep2lc6d8guyej5kr41o88yub6i2r1rhisryqh39vqcdtac1ziphcsqnfnd1mtf125tm169fti5v4kvuvjmysvrjihyk76yt8f96l7mzi7ne6eqmd01qz98f8n4liz36ma7qoh5ojd1uxrx4dkswbop9jqsqlpqn8fh6n2y5ct5y3lcd0e1qslcat0mcn485slocjukedtofu336bupk2c57047qhqpzgx9p1z9ypfgyayk6gh7c7907y9ff8liaumshb3z1a6vwqi9322qpgtn14zvlruoi4uyby1hyyoqf1i1bujb1qcmtim7ncxzsufv7b8ycno8u38fzavy5nj4hrh6l6znnkxuhmgacdxoc6ia7rofy0wib9dlog3ebbs6ave0gilby6mtzlcv2bpi 00:06:56.373 21:33:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:56.631 [2024-12-10 21:33:57.192845] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:06:56.631 [2024-12-10 21:33:57.192992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:06:56.631 [2024-12-10 21:33:57.343191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.631 [2024-12-10 21:33:57.393260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.889 [2024-12-10 21:33:57.430769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.454  [2024-12-10T21:33:58.237Z] Copying: 511/511 [MB] (average 1283 MBps) 00:06:57.454 00:06:57.454 21:33:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:57.454 21:33:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:57.712 21:33:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.712 21:33:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.712 [2024-12-10 21:33:58.281087] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:06:57.712 [2024-12-10 21:33:58.281203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:06:57.712 { 00:06:57.712 "subsystems": [ 00:06:57.712 { 00:06:57.712 "subsystem": "bdev", 00:06:57.712 "config": [ 00:06:57.712 { 00:06:57.712 "params": { 00:06:57.712 "block_size": 512, 00:06:57.712 "num_blocks": 1048576, 00:06:57.712 "name": "malloc0" 00:06:57.712 }, 00:06:57.712 "method": "bdev_malloc_create" 00:06:57.712 }, 00:06:57.712 { 00:06:57.712 "params": { 00:06:57.712 "filename": "/dev/zram1", 00:06:57.712 "name": "uring0" 00:06:57.712 }, 00:06:57.712 "method": "bdev_uring_create" 00:06:57.712 }, 00:06:57.712 { 00:06:57.712 "method": "bdev_wait_for_examine" 00:06:57.712 } 00:06:57.712 ] 00:06:57.712 } 00:06:57.712 ] 00:06:57.712 } 00:06:57.712 [2024-12-10 21:33:58.428577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.712 [2024-12-10 21:33:58.481663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.998 [2024-12-10 21:33:58.522806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.932  [2024-12-10T21:34:01.087Z] Copying: 147/512 [MB] (147 MBps) [2024-12-10T21:34:02.019Z] Copying: 315/512 [MB] (168 MBps) [2024-12-10T21:34:02.019Z] Copying: 485/512 [MB] (170 MBps) [2024-12-10T21:34:02.278Z] Copying: 512/512 [MB] (average 161 MBps) 00:07:01.495 00:07:01.495 21:34:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:01.495 21:34:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:01.495 21:34:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.495 21:34:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 [2024-12-10 21:34:02.113669] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:01.495 [2024-12-10 21:34:02.113807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:07:01.495 { 00:07:01.495 "subsystems": [ 00:07:01.495 { 00:07:01.495 "subsystem": "bdev", 00:07:01.495 "config": [ 00:07:01.495 { 00:07:01.495 "params": { 00:07:01.495 "block_size": 512, 00:07:01.495 "num_blocks": 1048576, 00:07:01.495 "name": "malloc0" 00:07:01.495 }, 00:07:01.495 "method": "bdev_malloc_create" 00:07:01.495 }, 00:07:01.495 { 00:07:01.495 "params": { 00:07:01.495 "filename": "/dev/zram1", 00:07:01.495 "name": "uring0" 00:07:01.495 }, 00:07:01.495 "method": "bdev_uring_create" 00:07:01.495 }, 00:07:01.495 { 00:07:01.495 "method": "bdev_wait_for_examine" 00:07:01.495 } 00:07:01.495 ] 00:07:01.495 } 00:07:01.495 ] 00:07:01.495 } 00:07:01.495 [2024-12-10 21:34:02.256261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.753 [2024-12-10 21:34:02.290392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.753 [2024-12-10 21:34:02.320982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.687  [2024-12-10T21:34:04.843Z] Copying: 154/512 [MB] (154 MBps) [2024-12-10T21:34:05.775Z] Copying: 276/512 [MB] (122 MBps) [2024-12-10T21:34:06.342Z] Copying: 432/512 [MB] (155 MBps) [2024-12-10T21:34:06.342Z] Copying: 512/512 [MB] (average 139 MBps) 00:07:05.559 00:07:05.559 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:05.559 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ e6wbj3kendi57r5apm511xrjgbg8mawafgjc9s9t0wbsbv46zm7b763mty12fyoja1zt16mcrv4miye6ctz0wkdsbjgr6bwv9ejpl2toeayo892v8mpxb3tswqhgrvvr5itpst1zp1bd37igi2yc71zaa1rubjfxpwdge06hu2qkumz45b9k2rq1vwb3bvyv5xqp9pluxfih1c1d9b37a88423xcsnawd9nma1g2sczmmktbo7pm1579dvmgp5iepex73dt8tqmy7d96fczjixyda4bxfluytzzz5gpm1fb9uvbgmzu2wkkn63kygthpa34s0pftozz3s3pfe3bhebg0cmpp4vzfm64k909hhrb0pj2t40ijopge4orock7xrlwq8yf2xyy804atppe96tg8wh37isknima8h90xyv34r1nkm8swk3vrqdwvmzsz198yx7261gjojzmopoe3bog2o07b0lbuib98m9dmegx1ed2c70oxkr8p5w1wq7n2nszy3rlybkqldd8mwic2n49r7vuup0l2w1qlt0roq7xt0xcaaybasc3k2g4hkclat4p3o8zr1s0hatj8rem3loy52f3hsoo1by4afa4yxpep2lc6d8guyej5kr41o88yub6i2r1rhisryqh39vqcdtac1ziphcsqnfnd1mtf125tm169fti5v4kvuvjmysvrjihyk76yt8f96l7mzi7ne6eqmd01qz98f8n4liz36ma7qoh5ojd1uxrx4dkswbop9jqsqlpqn8fh6n2y5ct5y3lcd0e1qslcat0mcn485slocjukedtofu336bupk2c57047qhqpzgx9p1z9ypfgyayk6gh7c7907y9ff8liaumshb3z1a6vwqi9322qpgtn14zvlruoi4uyby1hyyoqf1i1bujb1qcmtim7ncxzsufv7b8ycno8u38fzavy5nj4hrh6l6znnkxuhmgacdxoc6ia7rofy0wib9dlog3ebbs6ave0gilby6mtzlcv2bpi == 
\e\6\w\b\j\3\k\e\n\d\i\5\7\r\5\a\p\m\5\1\1\x\r\j\g\b\g\8\m\a\w\a\f\g\j\c\9\s\9\t\0\w\b\s\b\v\4\6\z\m\7\b\7\6\3\m\t\y\1\2\f\y\o\j\a\1\z\t\1\6\m\c\r\v\4\m\i\y\e\6\c\t\z\0\w\k\d\s\b\j\g\r\6\b\w\v\9\e\j\p\l\2\t\o\e\a\y\o\8\9\2\v\8\m\p\x\b\3\t\s\w\q\h\g\r\v\v\r\5\i\t\p\s\t\1\z\p\1\b\d\3\7\i\g\i\2\y\c\7\1\z\a\a\1\r\u\b\j\f\x\p\w\d\g\e\0\6\h\u\2\q\k\u\m\z\4\5\b\9\k\2\r\q\1\v\w\b\3\b\v\y\v\5\x\q\p\9\p\l\u\x\f\i\h\1\c\1\d\9\b\3\7\a\8\8\4\2\3\x\c\s\n\a\w\d\9\n\m\a\1\g\2\s\c\z\m\m\k\t\b\o\7\p\m\1\5\7\9\d\v\m\g\p\5\i\e\p\e\x\7\3\d\t\8\t\q\m\y\7\d\9\6\f\c\z\j\i\x\y\d\a\4\b\x\f\l\u\y\t\z\z\z\5\g\p\m\1\f\b\9\u\v\b\g\m\z\u\2\w\k\k\n\6\3\k\y\g\t\h\p\a\3\4\s\0\p\f\t\o\z\z\3\s\3\p\f\e\3\b\h\e\b\g\0\c\m\p\p\4\v\z\f\m\6\4\k\9\0\9\h\h\r\b\0\p\j\2\t\4\0\i\j\o\p\g\e\4\o\r\o\c\k\7\x\r\l\w\q\8\y\f\2\x\y\y\8\0\4\a\t\p\p\e\9\6\t\g\8\w\h\3\7\i\s\k\n\i\m\a\8\h\9\0\x\y\v\3\4\r\1\n\k\m\8\s\w\k\3\v\r\q\d\w\v\m\z\s\z\1\9\8\y\x\7\2\6\1\g\j\o\j\z\m\o\p\o\e\3\b\o\g\2\o\0\7\b\0\l\b\u\i\b\9\8\m\9\d\m\e\g\x\1\e\d\2\c\7\0\o\x\k\r\8\p\5\w\1\w\q\7\n\2\n\s\z\y\3\r\l\y\b\k\q\l\d\d\8\m\w\i\c\2\n\4\9\r\7\v\u\u\p\0\l\2\w\1\q\l\t\0\r\o\q\7\x\t\0\x\c\a\a\y\b\a\s\c\3\k\2\g\4\h\k\c\l\a\t\4\p\3\o\8\z\r\1\s\0\h\a\t\j\8\r\e\m\3\l\o\y\5\2\f\3\h\s\o\o\1\b\y\4\a\f\a\4\y\x\p\e\p\2\l\c\6\d\8\g\u\y\e\j\5\k\r\4\1\o\8\8\y\u\b\6\i\2\r\1\r\h\i\s\r\y\q\h\3\9\v\q\c\d\t\a\c\1\z\i\p\h\c\s\q\n\f\n\d\1\m\t\f\1\2\5\t\m\1\6\9\f\t\i\5\v\4\k\v\u\v\j\m\y\s\v\r\j\i\h\y\k\7\6\y\t\8\f\9\6\l\7\m\z\i\7\n\e\6\e\q\m\d\0\1\q\z\9\8\f\8\n\4\l\i\z\3\6\m\a\7\q\o\h\5\o\j\d\1\u\x\r\x\4\d\k\s\w\b\o\p\9\j\q\s\q\l\p\q\n\8\f\h\6\n\2\y\5\c\t\5\y\3\l\c\d\0\e\1\q\s\l\c\a\t\0\m\c\n\4\8\5\s\l\o\c\j\u\k\e\d\t\o\f\u\3\3\6\b\u\p\k\2\c\5\7\0\4\7\q\h\q\p\z\g\x\9\p\1\z\9\y\p\f\g\y\a\y\k\6\g\h\7\c\7\9\0\7\y\9\f\f\8\l\i\a\u\m\s\h\b\3\z\1\a\6\v\w\q\i\9\3\2\2\q\p\g\t\n\1\4\z\v\l\r\u\o\i\4\u\y\b\y\1\h\y\y\o\q\f\1\i\1\b\u\j\b\1\q\c\m\t\i\m\7\n\c\x\z\s\u\f\v\7\b\8\y\c\n\o\8\u\3\8\f\z\a\v\y\5\n\j\4\h\r\h\6\l\6\z\n\n\k\x\u\h\m\g\a\c\d\x\o\c\6\i\a\7\r\o\f\y\0\w\i\b\9\d\l\o\g\3\e\b\b\s\6\a\v\e\0\g\i\l\b\y\6\m\t\z\l\c\v\2\b\p\i ]] 00:07:05.559 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:05.560 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ e6wbj3kendi57r5apm511xrjgbg8mawafgjc9s9t0wbsbv46zm7b763mty12fyoja1zt16mcrv4miye6ctz0wkdsbjgr6bwv9ejpl2toeayo892v8mpxb3tswqhgrvvr5itpst1zp1bd37igi2yc71zaa1rubjfxpwdge06hu2qkumz45b9k2rq1vwb3bvyv5xqp9pluxfih1c1d9b37a88423xcsnawd9nma1g2sczmmktbo7pm1579dvmgp5iepex73dt8tqmy7d96fczjixyda4bxfluytzzz5gpm1fb9uvbgmzu2wkkn63kygthpa34s0pftozz3s3pfe3bhebg0cmpp4vzfm64k909hhrb0pj2t40ijopge4orock7xrlwq8yf2xyy804atppe96tg8wh37isknima8h90xyv34r1nkm8swk3vrqdwvmzsz198yx7261gjojzmopoe3bog2o07b0lbuib98m9dmegx1ed2c70oxkr8p5w1wq7n2nszy3rlybkqldd8mwic2n49r7vuup0l2w1qlt0roq7xt0xcaaybasc3k2g4hkclat4p3o8zr1s0hatj8rem3loy52f3hsoo1by4afa4yxpep2lc6d8guyej5kr41o88yub6i2r1rhisryqh39vqcdtac1ziphcsqnfnd1mtf125tm169fti5v4kvuvjmysvrjihyk76yt8f96l7mzi7ne6eqmd01qz98f8n4liz36ma7qoh5ojd1uxrx4dkswbop9jqsqlpqn8fh6n2y5ct5y3lcd0e1qslcat0mcn485slocjukedtofu336bupk2c57047qhqpzgx9p1z9ypfgyayk6gh7c7907y9ff8liaumshb3z1a6vwqi9322qpgtn14zvlruoi4uyby1hyyoqf1i1bujb1qcmtim7ncxzsufv7b8ycno8u38fzavy5nj4hrh6l6znnkxuhmgacdxoc6ia7rofy0wib9dlog3ebbs6ave0gilby6mtzlcv2bpi == 
\e\6\w\b\j\3\k\e\n\d\i\5\7\r\5\a\p\m\5\1\1\x\r\j\g\b\g\8\m\a\w\a\f\g\j\c\9\s\9\t\0\w\b\s\b\v\4\6\z\m\7\b\7\6\3\m\t\y\1\2\f\y\o\j\a\1\z\t\1\6\m\c\r\v\4\m\i\y\e\6\c\t\z\0\w\k\d\s\b\j\g\r\6\b\w\v\9\e\j\p\l\2\t\o\e\a\y\o\8\9\2\v\8\m\p\x\b\3\t\s\w\q\h\g\r\v\v\r\5\i\t\p\s\t\1\z\p\1\b\d\3\7\i\g\i\2\y\c\7\1\z\a\a\1\r\u\b\j\f\x\p\w\d\g\e\0\6\h\u\2\q\k\u\m\z\4\5\b\9\k\2\r\q\1\v\w\b\3\b\v\y\v\5\x\q\p\9\p\l\u\x\f\i\h\1\c\1\d\9\b\3\7\a\8\8\4\2\3\x\c\s\n\a\w\d\9\n\m\a\1\g\2\s\c\z\m\m\k\t\b\o\7\p\m\1\5\7\9\d\v\m\g\p\5\i\e\p\e\x\7\3\d\t\8\t\q\m\y\7\d\9\6\f\c\z\j\i\x\y\d\a\4\b\x\f\l\u\y\t\z\z\z\5\g\p\m\1\f\b\9\u\v\b\g\m\z\u\2\w\k\k\n\6\3\k\y\g\t\h\p\a\3\4\s\0\p\f\t\o\z\z\3\s\3\p\f\e\3\b\h\e\b\g\0\c\m\p\p\4\v\z\f\m\6\4\k\9\0\9\h\h\r\b\0\p\j\2\t\4\0\i\j\o\p\g\e\4\o\r\o\c\k\7\x\r\l\w\q\8\y\f\2\x\y\y\8\0\4\a\t\p\p\e\9\6\t\g\8\w\h\3\7\i\s\k\n\i\m\a\8\h\9\0\x\y\v\3\4\r\1\n\k\m\8\s\w\k\3\v\r\q\d\w\v\m\z\s\z\1\9\8\y\x\7\2\6\1\g\j\o\j\z\m\o\p\o\e\3\b\o\g\2\o\0\7\b\0\l\b\u\i\b\9\8\m\9\d\m\e\g\x\1\e\d\2\c\7\0\o\x\k\r\8\p\5\w\1\w\q\7\n\2\n\s\z\y\3\r\l\y\b\k\q\l\d\d\8\m\w\i\c\2\n\4\9\r\7\v\u\u\p\0\l\2\w\1\q\l\t\0\r\o\q\7\x\t\0\x\c\a\a\y\b\a\s\c\3\k\2\g\4\h\k\c\l\a\t\4\p\3\o\8\z\r\1\s\0\h\a\t\j\8\r\e\m\3\l\o\y\5\2\f\3\h\s\o\o\1\b\y\4\a\f\a\4\y\x\p\e\p\2\l\c\6\d\8\g\u\y\e\j\5\k\r\4\1\o\8\8\y\u\b\6\i\2\r\1\r\h\i\s\r\y\q\h\3\9\v\q\c\d\t\a\c\1\z\i\p\h\c\s\q\n\f\n\d\1\m\t\f\1\2\5\t\m\1\6\9\f\t\i\5\v\4\k\v\u\v\j\m\y\s\v\r\j\i\h\y\k\7\6\y\t\8\f\9\6\l\7\m\z\i\7\n\e\6\e\q\m\d\0\1\q\z\9\8\f\8\n\4\l\i\z\3\6\m\a\7\q\o\h\5\o\j\d\1\u\x\r\x\4\d\k\s\w\b\o\p\9\j\q\s\q\l\p\q\n\8\f\h\6\n\2\y\5\c\t\5\y\3\l\c\d\0\e\1\q\s\l\c\a\t\0\m\c\n\4\8\5\s\l\o\c\j\u\k\e\d\t\o\f\u\3\3\6\b\u\p\k\2\c\5\7\0\4\7\q\h\q\p\z\g\x\9\p\1\z\9\y\p\f\g\y\a\y\k\6\g\h\7\c\7\9\0\7\y\9\f\f\8\l\i\a\u\m\s\h\b\3\z\1\a\6\v\w\q\i\9\3\2\2\q\p\g\t\n\1\4\z\v\l\r\u\o\i\4\u\y\b\y\1\h\y\y\o\q\f\1\i\1\b\u\j\b\1\q\c\m\t\i\m\7\n\c\x\z\s\u\f\v\7\b\8\y\c\n\o\8\u\3\8\f\z\a\v\y\5\n\j\4\h\r\h\6\l\6\z\n\n\k\x\u\h\m\g\a\c\d\x\o\c\6\i\a\7\r\o\f\y\0\w\i\b\9\d\l\o\g\3\e\b\b\s\6\a\v\e\0\g\i\l\b\y\6\m\t\z\l\c\v\2\b\p\i ]] 00:07:05.560 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:06.125 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:06.125 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:06.125 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:06.125 21:34:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.125 [2024-12-10 21:34:06.775410] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:06.125 [2024-12-10 21:34:06.775525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:07:06.125 { 00:07:06.125 "subsystems": [ 00:07:06.125 { 00:07:06.125 "subsystem": "bdev", 00:07:06.125 "config": [ 00:07:06.125 { 00:07:06.125 "params": { 00:07:06.125 "block_size": 512, 00:07:06.125 "num_blocks": 1048576, 00:07:06.125 "name": "malloc0" 00:07:06.125 }, 00:07:06.125 "method": "bdev_malloc_create" 00:07:06.125 }, 00:07:06.125 { 00:07:06.125 "params": { 00:07:06.125 "filename": "/dev/zram1", 00:07:06.125 "name": "uring0" 00:07:06.125 }, 00:07:06.125 "method": "bdev_uring_create" 00:07:06.125 }, 00:07:06.125 { 00:07:06.125 "method": "bdev_wait_for_examine" 00:07:06.126 } 00:07:06.126 ] 00:07:06.126 } 00:07:06.126 ] 00:07:06.126 } 00:07:06.384 [2024-12-10 21:34:06.917600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.384 [2024-12-10 21:34:06.967204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.384 [2024-12-10 21:34:07.005317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.758  [2024-12-10T21:34:09.475Z] Copying: 139/512 [MB] (139 MBps) [2024-12-10T21:34:10.408Z] Copying: 275/512 [MB] (135 MBps) [2024-12-10T21:34:10.975Z] Copying: 413/512 [MB] (138 MBps) [2024-12-10T21:34:11.234Z] Copying: 512/512 [MB] (average 138 MBps) 00:07:10.451 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:10.451 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.451 [2024-12-10 21:34:11.172260] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:10.451 [2024-12-10 21:34:11.172393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61372 ] 00:07:10.451 { 00:07:10.451 "subsystems": [ 00:07:10.451 { 00:07:10.451 "subsystem": "bdev", 00:07:10.451 "config": [ 00:07:10.451 { 00:07:10.451 "params": { 00:07:10.451 "block_size": 512, 00:07:10.451 "num_blocks": 1048576, 00:07:10.451 "name": "malloc0" 00:07:10.451 }, 00:07:10.451 "method": "bdev_malloc_create" 00:07:10.451 }, 00:07:10.451 { 00:07:10.451 "params": { 00:07:10.451 "filename": "/dev/zram1", 00:07:10.451 "name": "uring0" 00:07:10.451 }, 00:07:10.451 "method": "bdev_uring_create" 00:07:10.451 }, 00:07:10.451 { 00:07:10.451 "params": { 00:07:10.451 "name": "uring0" 00:07:10.451 }, 00:07:10.451 "method": "bdev_uring_delete" 00:07:10.451 }, 00:07:10.451 { 00:07:10.451 "method": "bdev_wait_for_examine" 00:07:10.451 } 00:07:10.451 ] 00:07:10.451 } 00:07:10.451 ] 00:07:10.451 } 00:07:10.710 [2024-12-10 21:34:11.342869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.710 [2024-12-10 21:34:11.406138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.710 [2024-12-10 21:34:11.452577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.969  [2024-12-10T21:34:12.039Z] Copying: 0/0 [B] (average 0 Bps) 00:07:11.256 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:11.256 21:34:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:11.256 21:34:11 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:11.256 { 00:07:11.256 "subsystems": [ 00:07:11.256 { 00:07:11.256 "subsystem": "bdev", 00:07:11.256 "config": [ 00:07:11.256 { 00:07:11.256 "params": { 00:07:11.256 "block_size": 512, 00:07:11.256 "num_blocks": 1048576, 00:07:11.256 "name": "malloc0" 00:07:11.256 }, 00:07:11.256 "method": "bdev_malloc_create" 00:07:11.256 }, 00:07:11.256 { 00:07:11.256 "params": { 00:07:11.256 "filename": "/dev/zram1", 00:07:11.256 "name": "uring0" 00:07:11.256 }, 00:07:11.256 "method": "bdev_uring_create" 00:07:11.256 }, 00:07:11.256 { 00:07:11.256 "params": { 00:07:11.256 "name": "uring0" 00:07:11.256 }, 00:07:11.256 "method": "bdev_uring_delete" 00:07:11.256 }, 00:07:11.256 { 00:07:11.256 "method": "bdev_wait_for_examine" 00:07:11.256 } 00:07:11.256 ] 00:07:11.256 } 00:07:11.256 ] 00:07:11.256 } 00:07:11.256 [2024-12-10 21:34:11.886697] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:07:11.256 [2024-12-10 21:34:11.886822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61400 ] 00:07:11.256 [2024-12-10 21:34:12.037043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.515 [2024-12-10 21:34:12.086646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.515 [2024-12-10 21:34:12.120904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.515 [2024-12-10 21:34:12.245884] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:11.515 [2024-12-10 21:34:12.245946] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:11.515 [2024-12-10 21:34:12.245957] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:07:11.515 [2024-12-10 21:34:12.245968] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.773 [2024-12-10 21:34:12.423694] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:11.773 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:12.032 00:07:12.032 real 0m15.657s 00:07:12.032 user 0m10.593s 00:07:12.032 sys 0m14.506s 00:07:12.032 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.032 ************************************ 00:07:12.032 END TEST dd_uring_copy 00:07:12.032 21:34:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.032 ************************************ 00:07:12.032 00:07:12.032 real 0m15.897s 00:07:12.032 user 0m10.754s 00:07:12.032 sys 0m14.589s 00:07:12.032 21:34:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.032 21:34:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:12.032 ************************************ 00:07:12.032 END TEST spdk_dd_uring 00:07:12.032 ************************************ 00:07:12.292 21:34:12 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:12.292 21:34:12 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.292 21:34:12 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.292 21:34:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 ************************************ 00:07:12.292 START TEST spdk_dd_sparse 00:07:12.292 ************************************ 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:12.292 * Looking for test storage... 00:07:12.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.292 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.293 --rc genhtml_branch_coverage=1 00:07:12.293 --rc genhtml_function_coverage=1 00:07:12.293 --rc genhtml_legend=1 00:07:12.293 --rc geninfo_all_blocks=1 00:07:12.293 --rc geninfo_unexecuted_blocks=1 00:07:12.293 00:07:12.293 ' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.293 --rc genhtml_branch_coverage=1 00:07:12.293 --rc genhtml_function_coverage=1 00:07:12.293 --rc genhtml_legend=1 00:07:12.293 --rc geninfo_all_blocks=1 00:07:12.293 --rc geninfo_unexecuted_blocks=1 00:07:12.293 00:07:12.293 ' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.293 --rc genhtml_branch_coverage=1 00:07:12.293 --rc genhtml_function_coverage=1 00:07:12.293 --rc genhtml_legend=1 00:07:12.293 --rc geninfo_all_blocks=1 00:07:12.293 --rc geninfo_unexecuted_blocks=1 00:07:12.293 00:07:12.293 ' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.293 --rc genhtml_branch_coverage=1 00:07:12.293 --rc genhtml_function_coverage=1 00:07:12.293 --rc genhtml_legend=1 00:07:12.293 --rc geninfo_all_blocks=1 00:07:12.293 --rc geninfo_unexecuted_blocks=1 00:07:12.293 00:07:12.293 ' 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.293 21:34:12 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:12.293 21:34:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:12.293 1+0 records in 00:07:12.293 1+0 records out 00:07:12.293 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00559415 s, 750 MB/s 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:12.293 1+0 records in 00:07:12.293 1+0 records out 00:07:12.293 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00517161 s, 811 MB/s 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:12.293 1+0 records in 00:07:12.293 1+0 records out 00:07:12.293 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00624228 s, 672 MB/s 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:12.293 ************************************ 00:07:12.293 START TEST dd_sparse_file_to_file 00:07:12.293 ************************************ 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:12.293 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:12.552 [2024-12-10 21:34:13.102276] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:12.552 [2024-12-10 21:34:13.102400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 00:07:12.552 { 00:07:12.552 "subsystems": [ 00:07:12.552 { 00:07:12.552 "subsystem": "bdev", 00:07:12.552 "config": [ 00:07:12.552 { 00:07:12.552 "params": { 00:07:12.552 "block_size": 4096, 00:07:12.552 "filename": "dd_sparse_aio_disk", 00:07:12.552 "name": "dd_aio" 00:07:12.552 }, 00:07:12.552 "method": "bdev_aio_create" 00:07:12.552 }, 00:07:12.552 { 00:07:12.552 "params": { 00:07:12.552 "lvs_name": "dd_lvstore", 00:07:12.552 "bdev_name": "dd_aio" 00:07:12.552 }, 00:07:12.552 "method": "bdev_lvol_create_lvstore" 00:07:12.552 }, 00:07:12.552 { 00:07:12.552 "method": "bdev_wait_for_examine" 00:07:12.552 } 00:07:12.552 ] 00:07:12.552 } 00:07:12.552 ] 00:07:12.552 } 00:07:12.552 [2024-12-10 21:34:13.256540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.552 [2024-12-10 21:34:13.303721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.811 [2024-12-10 21:34:13.335075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.811  [2024-12-10T21:34:13.594Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:12.811 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:12.811 00:07:12.811 real 0m0.557s 00:07:12.811 user 0m0.341s 00:07:12.811 sys 0m0.266s 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.811 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:12.811 ************************************ 00:07:12.811 END TEST dd_sparse_file_to_file 00:07:12.811 ************************************ 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:13.070 ************************************ 00:07:13.070 START TEST dd_sparse_file_to_bdev 
00:07:13.070 ************************************ 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.070 21:34:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.070 [2024-12-10 21:34:13.698408] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:07:13.070 [2024-12-10 21:34:13.698571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61542 ] 00:07:13.070 { 00:07:13.070 "subsystems": [ 00:07:13.070 { 00:07:13.070 "subsystem": "bdev", 00:07:13.070 "config": [ 00:07:13.070 { 00:07:13.070 "params": { 00:07:13.070 "block_size": 4096, 00:07:13.070 "filename": "dd_sparse_aio_disk", 00:07:13.070 "name": "dd_aio" 00:07:13.070 }, 00:07:13.070 "method": "bdev_aio_create" 00:07:13.070 }, 00:07:13.070 { 00:07:13.070 "params": { 00:07:13.070 "lvs_name": "dd_lvstore", 00:07:13.070 "lvol_name": "dd_lvol", 00:07:13.070 "size_in_mib": 36, 00:07:13.070 "thin_provision": true 00:07:13.070 }, 00:07:13.070 "method": "bdev_lvol_create" 00:07:13.070 }, 00:07:13.070 { 00:07:13.070 "method": "bdev_wait_for_examine" 00:07:13.070 } 00:07:13.070 ] 00:07:13.070 } 00:07:13.070 ] 00:07:13.070 } 00:07:13.070 [2024-12-10 21:34:13.848530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.330 [2024-12-10 21:34:13.898516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.330 [2024-12-10 21:34:13.934762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.330  [2024-12-10T21:34:14.371Z] Copying: 12/36 [MB] (average 705 MBps) 00:07:13.588 00:07:13.588 00:07:13.588 real 0m0.550s 00:07:13.588 user 0m0.337s 00:07:13.588 sys 0m0.268s 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.588 ************************************ 00:07:13.588 END TEST dd_sparse_file_to_bdev 00:07:13.588 ************************************ 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.588 21:34:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:13.589 ************************************ 00:07:13.589 START TEST dd_sparse_bdev_to_file 00:07:13.589 ************************************ 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:13.589 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:13.589 { 00:07:13.589 "subsystems": [ 00:07:13.589 { 00:07:13.589 "subsystem": "bdev", 00:07:13.589 "config": [ 00:07:13.589 { 00:07:13.589 "params": { 00:07:13.589 "block_size": 4096, 00:07:13.589 "filename": "dd_sparse_aio_disk", 00:07:13.589 "name": "dd_aio" 00:07:13.589 }, 00:07:13.589 "method": "bdev_aio_create" 00:07:13.589 }, 00:07:13.589 { 00:07:13.589 "method": "bdev_wait_for_examine" 00:07:13.589 } 00:07:13.589 ] 00:07:13.589 } 00:07:13.589 ] 00:07:13.589 } 00:07:13.589 [2024-12-10 21:34:14.291868] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
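The read-back direction uses --ib instead of --if, and its config only declares the AIO bdev: the dd_lvstore/dd_lvol created and populated in the previous step is expected to be rediscovered during bdev examine. A minimal sketch of that step, under the same standalone-shell assumptions as above:

cat > dd_readback.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# Copy the logical volume back out to a file, again skipping holes.
./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json dd_readback.json

# file_zero2 and file_zero3 should agree on both apparent size (%s) and
# allocated 512-byte blocks (%b) if sparseness survived the round trip.
[ "$(stat --printf=%s file_zero2)" -eq "$(stat --printf=%s file_zero3)" ]
[ "$(stat --printf=%b file_zero2)" -eq "$(stat --printf=%b file_zero3)" ]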
00:07:13.589 [2024-12-10 21:34:14.291999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:07:13.847 [2024-12-10 21:34:14.437329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.847 [2024-12-10 21:34:14.485328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.847 [2024-12-10 21:34:14.524001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.847  [2024-12-10T21:34:14.889Z] Copying: 12/36 [MB] (average 1090 MBps) 00:07:14.106 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:14.106 ************************************ 00:07:14.106 END TEST dd_sparse_bdev_to_file 00:07:14.106 ************************************ 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:14.106 00:07:14.106 real 0m0.520s 00:07:14.106 user 0m0.315s 00:07:14.106 sys 0m0.239s 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:14.106 ************************************ 00:07:14.106 END TEST spdk_dd_sparse 00:07:14.106 ************************************ 00:07:14.106 00:07:14.106 real 0m1.974s 00:07:14.106 user 0m1.155s 00:07:14.106 sys 0m0.958s 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.106 21:34:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:14.106 21:34:14 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:14.106 21:34:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.106 21:34:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.106 21:34:14 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.106 ************************************ 00:07:14.106 START TEST spdk_dd_negative 00:07:14.106 ************************************ 00:07:14.106 21:34:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:14.365 * Looking for test storage... 00:07:14.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.365 21:34:14 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:14.365 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.365 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.365 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.365 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:14.365 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.366 --rc genhtml_branch_coverage=1 00:07:14.366 --rc genhtml_function_coverage=1 00:07:14.366 --rc genhtml_legend=1 00:07:14.366 --rc geninfo_all_blocks=1 00:07:14.366 --rc geninfo_unexecuted_blocks=1 00:07:14.366 00:07:14.366 ' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.366 --rc genhtml_branch_coverage=1 00:07:14.366 --rc genhtml_function_coverage=1 00:07:14.366 --rc genhtml_legend=1 00:07:14.366 --rc geninfo_all_blocks=1 00:07:14.366 --rc geninfo_unexecuted_blocks=1 00:07:14.366 00:07:14.366 ' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.366 --rc genhtml_branch_coverage=1 00:07:14.366 --rc genhtml_function_coverage=1 00:07:14.366 --rc genhtml_legend=1 00:07:14.366 --rc geninfo_all_blocks=1 00:07:14.366 --rc geninfo_unexecuted_blocks=1 00:07:14.366 00:07:14.366 ' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.366 --rc genhtml_branch_coverage=1 00:07:14.366 --rc genhtml_function_coverage=1 00:07:14.366 --rc genhtml_legend=1 00:07:14.366 --rc geninfo_all_blocks=1 00:07:14.366 --rc geninfo_unexecuted_blocks=1 00:07:14.366 00:07:14.366 ' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.366 ************************************ 00:07:14.366 START TEST 
dd_invalid_arguments 00:07:14.366 ************************************ 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.366 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:14.366 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:14.366 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:14.366 00:07:14.366 CPU options: 00:07:14.366 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:14.366 (like [0,1,10]) 00:07:14.366 --lcores lcore to CPU mapping list. The list is in the format: 00:07:14.366 [<,lcores[@CPUs]>...] 00:07:14.366 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:14.366 Within the group, '-' is used for range separator, 00:07:14.366 ',' is used for single number separator. 00:07:14.366 '( )' can be omitted for single element group, 00:07:14.366 '@' can be omitted if cpus and lcores have the same value 00:07:14.366 --disable-cpumask-locks Disable CPU core lock files. 00:07:14.366 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:14.366 pollers in the app support interrupt mode) 00:07:14.366 -p, --main-core main (primary) core for DPDK 00:07:14.366 00:07:14.366 Configuration options: 00:07:14.366 -c, --config, --json JSON config file 00:07:14.366 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:14.366 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:14.366 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:14.366 --rpcs-allowed comma-separated list of permitted RPCS 00:07:14.366 --json-ignore-init-errors don't exit on invalid config entry 00:07:14.366 00:07:14.366 Memory options: 00:07:14.366 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:14.366 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:14.366 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:14.366 -R, --huge-unlink unlink huge files after initialization 00:07:14.366 -n, --mem-channels number of memory channels used for DPDK 00:07:14.366 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:14.366 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:14.366 --no-huge run without using hugepages 00:07:14.366 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:14.366 -i, --shm-id shared memory ID (optional) 00:07:14.366 -g, --single-file-segments force creating just one hugetlbfs file 00:07:14.366 00:07:14.366 PCI options: 00:07:14.366 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:14.366 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:14.366 -u, --no-pci disable PCI access 00:07:14.366 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:14.366 00:07:14.366 Log options: 00:07:14.366 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:14.366 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:14.366 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:14.366 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:14.366 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:14.366 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:14.366 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:14.366 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:14.366 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:14.366 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:14.366 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:14.366 --silence-noticelog disable notice level logging to stderr 00:07:14.366 00:07:14.366 Trace options: 00:07:14.366 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:14.366 setting 0 to disable trace (default 32768) 00:07:14.366 Tracepoints vary in size and can use more than one trace entry. 00:07:14.366 -e, --tpoint-group [:] 00:07:14.367 [2024-12-10 21:34:15.074373] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:07:14.367 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:14.367 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:14.367 bdev_raid, scheduler, all). 00:07:14.367 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:14.367 a tracepoint group. First tpoint inside a group can be enabled by 00:07:14.367 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:14.367 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:14.367 in /include/spdk_internal/trace_defs.h 00:07:14.367 00:07:14.367 Other options: 00:07:14.367 -h, --help show this usage 00:07:14.367 -v, --version print SPDK version 00:07:14.367 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:14.367 --env-context Opaque context for use of the env implementation 00:07:14.367 00:07:14.367 Application specific: 00:07:14.367 [--------- DD Options ---------] 00:07:14.367 --if Input file. Must specify either --if or --ib. 00:07:14.367 --ib Input bdev. Must specifier either --if or --ib 00:07:14.367 --of Output file. Must specify either --of or --ob. 00:07:14.367 --ob Output bdev. Must specify either --of or --ob. 00:07:14.367 --iflag Input file flags. 00:07:14.367 --oflag Output file flags. 00:07:14.367 --bs I/O unit size (default: 4096) 00:07:14.367 --qd Queue depth (default: 2) 00:07:14.367 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:14.367 --skip Skip this many I/O units at start of input. (default: 0) 00:07:14.367 --seek Skip this many I/O units at start of output. (default: 0) 00:07:14.367 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:14.367 --sparse Enable hole skipping in input target 00:07:14.367 Available iflag and oflag values: 00:07:14.367 append - append mode 00:07:14.367 direct - use direct I/O for data 00:07:14.367 directory - fail unless a directory 00:07:14.367 dsync - use synchronized I/O for data 00:07:14.367 noatime - do not update access time 00:07:14.367 noctty - do not assign controlling terminal from file 00:07:14.367 nofollow - do not follow symlinks 00:07:14.367 nonblock - use non-blocking I/O 00:07:14.367 sync - use synchronized I/O for data and metadata 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.367 00:07:14.367 real 0m0.065s 00:07:14.367 user 0m0.039s 00:07:14.367 sys 0m0.025s 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:14.367 ************************************ 00:07:14.367 END TEST dd_invalid_arguments 00:07:14.367 ************************************ 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.367 ************************************ 00:07:14.367 START TEST dd_double_input 00:07:14.367 ************************************ 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.367 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:14.625 [2024-12-10 21:34:15.186824] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
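These negative tests all reduce to the same pattern: the NOT and valid_exec_arg helpers from autotest_common.sh run spdk_dd with contradictory or invalid options and require a non-zero exit status together with the parser error shown above. A rough standalone equivalent of this --if/--ib conflict check (paths shortened, output captured to a hypothetical scratch file) might look like:

# Expect failure: both --if and --ib are given (an empty --ib= still counts as set).
touch dd.dump0
if ./build/bin/spdk_dd --if=dd.dump0 --ib= --ob= > dd_err.log 2>&1; then
    echo 'spdk_dd unexpectedly accepted --if together with --ib' >&2
    exit 1
fi
# The harness only checks the exit code; grepping the message makes the
# expectation explicit in this sketch.
grep -q 'You may specify either --if or --ib, but not both' dd_err.log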
00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.625 00:07:14.625 real 0m0.069s 00:07:14.625 user 0m0.042s 00:07:14.625 sys 0m0.025s 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:14.625 ************************************ 00:07:14.625 END TEST dd_double_input 00:07:14.625 ************************************ 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.625 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.625 ************************************ 00:07:14.625 START TEST dd_double_output 00:07:14.626 ************************************ 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:14.626 [2024-12-10 21:34:15.298918] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:14.626 ************************************ 00:07:14.626 END TEST dd_double_output 00:07:14.626 ************************************ 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.626 00:07:14.626 real 0m0.067s 00:07:14.626 user 0m0.038s 00:07:14.626 sys 0m0.027s 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.626 ************************************ 00:07:14.626 START TEST dd_no_input 00:07:14.626 ************************************ 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.626 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:14.884 [2024-12-10 21:34:15.421575] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:07:14.884 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:14.884 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.884 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.884 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.884 00:07:14.884 real 0m0.077s 00:07:14.884 user 0m0.049s 00:07:14.884 sys 0m0.027s 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 ************************************ 00:07:14.885 END TEST dd_no_input 00:07:14.885 ************************************ 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 ************************************ 00:07:14.885 START TEST dd_no_output 00:07:14.885 ************************************ 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.885 [2024-12-10 21:34:15.549223] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:07:14.885 
************************************ 00:07:14.885 END TEST dd_no_output 00:07:14.885 ************************************ 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.885 00:07:14.885 real 0m0.096s 00:07:14.885 user 0m0.061s 00:07:14.885 sys 0m0.033s 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 ************************************ 00:07:14.885 START TEST dd_wrong_blocksize 00:07:14.885 ************************************ 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.885 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:15.143 [2024-12-10 21:34:15.690827] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.143 00:07:15.143 real 0m0.095s 00:07:15.143 user 0m0.068s 00:07:15.143 sys 0m0.025s 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:15.143 ************************************ 00:07:15.143 END TEST dd_wrong_blocksize 00:07:15.143 ************************************ 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:15.143 ************************************ 00:07:15.143 START TEST dd_smaller_blocksize 00:07:15.143 ************************************ 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.143 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.144 
21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.144 21:34:15 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:15.144 [2024-12-10 21:34:15.834150] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:07:15.144 [2024-12-10 21:34:15.834294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61801 ] 00:07:15.402 [2024-12-10 21:34:15.984894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.402 [2024-12-10 21:34:16.033846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.402 [2024-12-10 21:34:16.070278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.658 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:15.916 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:15.916 [2024-12-10 21:34:16.601544] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:15.916 [2024-12-10 21:34:16.601635] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.916 [2024-12-10 21:34:16.669637] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:16.174 ************************************ 00:07:16.174 END TEST dd_smaller_blocksize 00:07:16.174 ************************************ 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.174 00:07:16.174 real 0m0.968s 00:07:16.174 user 0m0.361s 00:07:16.174 sys 0m0.497s 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:16.174 ************************************ 00:07:16.174 START TEST dd_invalid_count 00:07:16.174 ************************************ 00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:07:16.174 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:16.175 [2024-12-10 21:34:16.838476] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.175 00:07:16.175 real 0m0.096s 00:07:16.175 user 0m0.066s 00:07:16.175 sys 0m0.028s 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:16.175 ************************************ 00:07:16.175 END TEST dd_invalid_count 00:07:16.175 ************************************ 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:16.175 ************************************ 
00:07:16.175 START TEST dd_invalid_oflag 00:07:16.175 ************************************ 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.175 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:16.447 [2024-12-10 21:34:16.977059] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:07:16.447 21:34:16 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.447 00:07:16.447 real 0m0.091s 00:07:16.447 user 0m0.056s 00:07:16.447 sys 0m0.033s 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.447 ************************************ 00:07:16.447 END TEST dd_invalid_oflag 00:07:16.447 ************************************ 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 ************************************ 00:07:16.447 START TEST dd_invalid_iflag 00:07:16.447 
************************************ 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:16.447 [2024-12-10 21:34:17.111497] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.447 00:07:16.447 real 0m0.092s 00:07:16.447 user 0m0.062s 00:07:16.447 sys 0m0.029s 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 ************************************ 00:07:16.447 END TEST dd_invalid_iflag 00:07:16.447 ************************************ 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.447 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 ************************************ 00:07:16.447 START TEST dd_unknown_flag 00:07:16.447 ************************************ 00:07:16.447 
21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.448 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:16.717 [2024-12-10 21:34:17.229864] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:16.717 [2024-12-10 21:34:17.230171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:07:16.717 [2024-12-10 21:34:17.371594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.717 [2024-12-10 21:34:17.419529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.717 [2024-12-10 21:34:17.454702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.717 [2024-12-10 21:34:17.479933] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:16.717 [2024-12-10 21:34:17.480015] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.717 [2024-12-10 21:34:17.480095] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:07:16.717 [2024-12-10 21:34:17.480114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.717 [2024-12-10 21:34:17.480426] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:16.717 [2024-12-10 21:34:17.480469] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.717 [2024-12-10 21:34:17.480538] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:16.717 [2024-12-10 21:34:17.480556] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:16.976 [2024-12-10 21:34:17.559280] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.976 00:07:16.976 real 0m0.444s 00:07:16.976 user 0m0.235s 00:07:16.976 sys 0m0.110s 00:07:16.976 ************************************ 00:07:16.976 END TEST dd_unknown_flag 00:07:16.976 ************************************ 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 ************************************ 00:07:16.976 START TEST dd_invalid_json 00:07:16.976 ************************************ 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.976 21:34:17 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:16.976 [2024-12-10 21:34:17.732284] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:16.976 [2024-12-10 21:34:17.732424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61927 ] 00:07:17.234 [2024-12-10 21:34:17.883701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.234 [2024-12-10 21:34:17.934366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.234 [2024-12-10 21:34:17.934523] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:17.234 [2024-12-10 21:34:17.934553] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:17.234 [2024-12-10 21:34:17.934570] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.234 [2024-12-10 21:34:17.934629] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:17.234 ************************************ 00:07:17.234 END TEST dd_invalid_json 00:07:17.234 ************************************ 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.234 00:07:17.234 real 0m0.349s 00:07:17.234 user 0m0.185s 00:07:17.234 sys 0m0.060s 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.234 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.493 ************************************ 00:07:17.493 START TEST dd_invalid_seek 00:07:17.493 ************************************ 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:17.493 
21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.493 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.494 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:17.494 [2024-12-10 21:34:18.108482] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:17.494 [2024-12-10 21:34:18.108790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:07:17.494 { 00:07:17.494 "subsystems": [ 00:07:17.494 { 00:07:17.494 "subsystem": "bdev", 00:07:17.494 "config": [ 00:07:17.494 { 00:07:17.494 "params": { 00:07:17.494 "block_size": 512, 00:07:17.494 "num_blocks": 512, 00:07:17.494 "name": "malloc0" 00:07:17.494 }, 00:07:17.494 "method": "bdev_malloc_create" 00:07:17.494 }, 00:07:17.494 { 00:07:17.494 "params": { 00:07:17.494 "block_size": 512, 00:07:17.494 "num_blocks": 512, 00:07:17.494 "name": "malloc1" 00:07:17.494 }, 00:07:17.494 "method": "bdev_malloc_create" 00:07:17.494 }, 00:07:17.494 { 00:07:17.494 "method": "bdev_wait_for_examine" 00:07:17.494 } 00:07:17.494 ] 00:07:17.494 } 00:07:17.494 ] 00:07:17.494 } 00:07:17.752 [2024-12-10 21:34:18.296223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.752 [2024-12-10 21:34:18.345027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.752 [2024-12-10 21:34:18.387315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.753 [2024-12-10 21:34:18.436052] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:17.753 [2024-12-10 21:34:18.436282] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.753 [2024-12-10 21:34:18.509833] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:18.011 ************************************ 00:07:18.011 END TEST dd_invalid_seek 00:07:18.011 ************************************ 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.011 00:07:18.011 real 0m0.522s 00:07:18.011 user 0m0.366s 00:07:18.011 sys 0m0.115s 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.011 ************************************ 00:07:18.011 START TEST dd_invalid_skip 00:07:18.011 ************************************ 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.011 21:34:18 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:18.011 { 00:07:18.011 "subsystems": [ 00:07:18.011 { 00:07:18.011 "subsystem": "bdev", 00:07:18.011 "config": [ 00:07:18.011 { 00:07:18.011 "params": { 00:07:18.011 "block_size": 512, 00:07:18.011 "num_blocks": 512, 00:07:18.011 "name": "malloc0" 00:07:18.011 }, 00:07:18.011 "method": "bdev_malloc_create" 00:07:18.011 }, 00:07:18.011 { 00:07:18.011 "params": { 00:07:18.011 "block_size": 512, 00:07:18.011 "num_blocks": 512, 00:07:18.011 "name": "malloc1" 
00:07:18.011 }, 00:07:18.011 "method": "bdev_malloc_create" 00:07:18.011 }, 00:07:18.011 { 00:07:18.011 "method": "bdev_wait_for_examine" 00:07:18.011 } 00:07:18.011 ] 00:07:18.011 } 00:07:18.011 ] 00:07:18.011 } 00:07:18.011 [2024-12-10 21:34:18.687270] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:07:18.011 [2024-12-10 21:34:18.687412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61990 ] 00:07:18.270 [2024-12-10 21:34:18.840347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.270 [2024-12-10 21:34:18.873960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.270 [2024-12-10 21:34:18.905182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.270 [2024-12-10 21:34:18.950916] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:18.270 [2024-12-10 21:34:18.950991] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.270 [2024-12-10 21:34:19.019755] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:18.528 ************************************ 00:07:18.528 END TEST dd_invalid_skip 00:07:18.528 ************************************ 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.528 00:07:18.528 real 0m0.467s 00:07:18.528 user 0m0.306s 00:07:18.528 sys 0m0.108s 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.528 ************************************ 00:07:18.528 START TEST dd_invalid_input_count 00:07:18.528 ************************************ 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:18.528 21:34:19 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:18.528 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.529 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:18.529 [2024-12-10 21:34:19.175096] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:18.529 [2024-12-10 21:34:19.175200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:07:18.529 { 00:07:18.529 "subsystems": [ 00:07:18.529 { 00:07:18.529 "subsystem": "bdev", 00:07:18.529 "config": [ 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "block_size": 512, 00:07:18.529 "num_blocks": 512, 00:07:18.529 "name": "malloc0" 00:07:18.529 }, 00:07:18.529 "method": "bdev_malloc_create" 00:07:18.529 }, 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "block_size": 512, 00:07:18.529 "num_blocks": 512, 00:07:18.529 "name": "malloc1" 00:07:18.529 }, 00:07:18.529 "method": "bdev_malloc_create" 00:07:18.529 }, 00:07:18.529 { 00:07:18.529 "method": "bdev_wait_for_examine" 00:07:18.529 } 00:07:18.529 ] 00:07:18.529 } 00:07:18.529 ] 00:07:18.529 } 00:07:18.787 [2024-12-10 21:34:19.317164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.787 [2024-12-10 21:34:19.363687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.787 [2024-12-10 21:34:19.400930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.787 [2024-12-10 21:34:19.455116] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:18.787 [2024-12-10 21:34:19.455206] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.787 [2024-12-10 21:34:19.527799] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:19.046 ************************************ 00:07:19.046 END TEST dd_invalid_input_count 00:07:19.046 ************************************ 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.046 00:07:19.046 real 0m0.465s 00:07:19.046 user 0m0.299s 00:07:19.046 sys 0m0.120s 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.046 ************************************ 00:07:19.046 START TEST dd_invalid_output_count 00:07:19.046 ************************************ 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.046 21:34:19 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:19.046 { 00:07:19.046 "subsystems": [ 00:07:19.046 { 00:07:19.046 "subsystem": "bdev", 00:07:19.046 "config": [ 00:07:19.046 { 00:07:19.046 "params": { 00:07:19.046 "block_size": 512, 00:07:19.046 "num_blocks": 512, 00:07:19.046 "name": "malloc0" 00:07:19.046 }, 00:07:19.046 "method": "bdev_malloc_create" 00:07:19.046 }, 00:07:19.046 { 00:07:19.046 "method": "bdev_wait_for_examine" 00:07:19.046 } 00:07:19.046 ] 00:07:19.046 } 00:07:19.046 ] 00:07:19.046 } 00:07:19.047 [2024-12-10 21:34:19.704880] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 
initialization... 00:07:19.047 [2024-12-10 21:34:19.705021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62059 ] 00:07:19.305 [2024-12-10 21:34:19.854546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.305 [2024-12-10 21:34:19.903301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.305 [2024-12-10 21:34:19.935856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.305 [2024-12-10 21:34:19.974225] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:19.305 [2024-12-10 21:34:19.974301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.305 [2024-12-10 21:34:20.043065] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.563 00:07:19.563 real 0m0.472s 00:07:19.563 user 0m0.296s 00:07:19.563 sys 0m0.129s 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.563 ************************************ 00:07:19.563 END TEST dd_invalid_output_count 00:07:19.563 ************************************ 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.563 ************************************ 00:07:19.563 START TEST dd_bs_not_multiple 00:07:19.563 ************************************ 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:19.563 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:19.564 21:34:20 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.564 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:19.564 { 00:07:19.564 "subsystems": [ 00:07:19.564 { 00:07:19.564 "subsystem": "bdev", 00:07:19.564 "config": [ 00:07:19.564 { 00:07:19.564 "params": { 00:07:19.564 "block_size": 512, 00:07:19.564 "num_blocks": 512, 00:07:19.564 "name": "malloc0" 00:07:19.564 }, 00:07:19.564 "method": "bdev_malloc_create" 00:07:19.564 }, 00:07:19.564 { 00:07:19.564 "params": { 00:07:19.564 "block_size": 512, 00:07:19.564 "num_blocks": 512, 00:07:19.564 "name": "malloc1" 00:07:19.564 }, 00:07:19.564 "method": "bdev_malloc_create" 00:07:19.564 }, 00:07:19.564 { 00:07:19.564 "method": "bdev_wait_for_examine" 00:07:19.564 } 00:07:19.564 ] 00:07:19.564 } 00:07:19.564 ] 00:07:19.564 } 00:07:19.564 [2024-12-10 21:34:20.221362] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:19.564 [2024-12-10 21:34:20.221525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62089 ] 00:07:19.822 [2024-12-10 21:34:20.368803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.822 [2024-12-10 21:34:20.402995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.822 [2024-12-10 21:34:20.434986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.822 [2024-12-10 21:34:20.487290] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:19.822 [2024-12-10 21:34:20.487389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.822 [2024-12-10 21:34:20.556186] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.080 00:07:20.080 real 0m0.483s 00:07:20.080 user 0m0.317s 00:07:20.080 sys 0m0.128s 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.080 ************************************ 00:07:20.080 END TEST dd_bs_not_multiple 00:07:20.080 ************************************ 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 00:07:20.080 real 0m5.823s 00:07:20.080 user 0m3.178s 00:07:20.080 sys 0m2.037s 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.080 21:34:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 ************************************ 00:07:20.080 END TEST spdk_dd_negative 00:07:20.080 ************************************ 00:07:20.080 ************************************ 00:07:20.080 END TEST spdk_dd 00:07:20.080 ************************************ 00:07:20.080 00:07:20.080 real 1m11.806s 00:07:20.080 user 0m46.770s 00:07:20.080 sys 0m30.990s 00:07:20.080 21:34:20 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.080 21:34:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 21:34:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:20.080 21:34:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.080 21:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 21:34:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:07:20.080 21:34:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:20.080 21:34:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.080 21:34:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.080 21:34:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.080 21:34:20 -- common/autotest_common.sh@10 -- # set +x 00:07:20.080 ************************************ 00:07:20.080 START TEST nvmf_tcp 00:07:20.080 ************************************ 00:07:20.080 21:34:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:20.080 * Looking for test storage... 00:07:20.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:20.080 21:34:20 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.080 21:34:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.080 21:34:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.338 21:34:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.338 21:34:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.338 21:34:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.338 21:34:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.338 21:34:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.338 21:34:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.339 21:34:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.339 --rc genhtml_branch_coverage=1 00:07:20.339 --rc genhtml_function_coverage=1 00:07:20.339 --rc genhtml_legend=1 00:07:20.339 --rc geninfo_all_blocks=1 00:07:20.339 --rc geninfo_unexecuted_blocks=1 00:07:20.339 00:07:20.339 ' 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.339 --rc genhtml_branch_coverage=1 00:07:20.339 --rc genhtml_function_coverage=1 00:07:20.339 --rc genhtml_legend=1 00:07:20.339 --rc geninfo_all_blocks=1 00:07:20.339 --rc geninfo_unexecuted_blocks=1 00:07:20.339 00:07:20.339 ' 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.339 --rc genhtml_branch_coverage=1 00:07:20.339 --rc genhtml_function_coverage=1 00:07:20.339 --rc genhtml_legend=1 00:07:20.339 --rc geninfo_all_blocks=1 00:07:20.339 --rc geninfo_unexecuted_blocks=1 00:07:20.339 00:07:20.339 ' 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.339 --rc genhtml_branch_coverage=1 00:07:20.339 --rc genhtml_function_coverage=1 00:07:20.339 --rc genhtml_legend=1 00:07:20.339 --rc geninfo_all_blocks=1 00:07:20.339 --rc geninfo_unexecuted_blocks=1 00:07:20.339 00:07:20.339 ' 00:07:20.339 21:34:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:20.339 21:34:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:20.339 21:34:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.339 21:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.339 ************************************ 00:07:20.339 START TEST nvmf_target_core 00:07:20.339 ************************************ 00:07:20.339 21:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:20.339 * Looking for test storage... 00:07:20.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:20.339 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.339 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.339 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.598 --rc genhtml_branch_coverage=1 00:07:20.598 --rc genhtml_function_coverage=1 00:07:20.598 --rc genhtml_legend=1 00:07:20.598 --rc geninfo_all_blocks=1 00:07:20.598 --rc geninfo_unexecuted_blocks=1 00:07:20.598 00:07:20.598 ' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.598 --rc genhtml_branch_coverage=1 00:07:20.598 --rc genhtml_function_coverage=1 00:07:20.598 --rc genhtml_legend=1 00:07:20.598 --rc geninfo_all_blocks=1 00:07:20.598 --rc geninfo_unexecuted_blocks=1 00:07:20.598 00:07:20.598 ' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.598 --rc genhtml_branch_coverage=1 00:07:20.598 --rc genhtml_function_coverage=1 00:07:20.598 --rc genhtml_legend=1 00:07:20.598 --rc geninfo_all_blocks=1 00:07:20.598 --rc geninfo_unexecuted_blocks=1 00:07:20.598 00:07:20.598 ' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.598 --rc genhtml_branch_coverage=1 00:07:20.598 --rc genhtml_function_coverage=1 00:07:20.598 --rc genhtml_legend=1 00:07:20.598 --rc geninfo_all_blocks=1 00:07:20.598 --rc geninfo_unexecuted_blocks=1 00:07:20.598 00:07:20.598 ' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.598 21:34:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
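For reference, a minimal sketch of the host-identity setup that common.sh@17-20 traces above; the uuidgen fallback and the final connect line are assumptions, everything else is taken from the log:

NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null || echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)")
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}              # 3cdb6c65-... in this run
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
# later used roughly as: $NVME_CONNECT -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"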
00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.599 ************************************ 00:07:20.599 START TEST nvmf_host_management 00:07:20.599 ************************************ 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:20.599 * Looking for test storage... 
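The "[: : integer expression expected" complaint from common.sh line 33 above is the usual bash failure mode of comparing an empty string numerically. A minimal reproduction and the standard guard (the variable name here is illustrative, not taken from the log):

flag=""
[ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected" to stderr, evaluates false
[ "${flag:-0}" -eq 1 ] && echo enabled   # safe: an empty/unset flag defaults to 0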
00:07:20.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.599 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.858 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.858 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.859 --rc genhtml_branch_coverage=1 00:07:20.859 --rc genhtml_function_coverage=1 00:07:20.859 --rc genhtml_legend=1 00:07:20.859 --rc geninfo_all_blocks=1 00:07:20.859 --rc geninfo_unexecuted_blocks=1 00:07:20.859 00:07:20.859 ' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.859 --rc genhtml_branch_coverage=1 00:07:20.859 --rc genhtml_function_coverage=1 00:07:20.859 --rc genhtml_legend=1 00:07:20.859 --rc geninfo_all_blocks=1 00:07:20.859 --rc geninfo_unexecuted_blocks=1 00:07:20.859 00:07:20.859 ' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.859 --rc genhtml_branch_coverage=1 00:07:20.859 --rc genhtml_function_coverage=1 00:07:20.859 --rc genhtml_legend=1 00:07:20.859 --rc geninfo_all_blocks=1 00:07:20.859 --rc geninfo_unexecuted_blocks=1 00:07:20.859 00:07:20.859 ' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.859 --rc genhtml_branch_coverage=1 00:07:20.859 --rc genhtml_function_coverage=1 00:07:20.859 --rc genhtml_legend=1 00:07:20.859 --rc geninfo_all_blocks=1 00:07:20.859 --rc geninfo_unexecuted_blocks=1 00:07:20.859 00:07:20.859 ' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
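The lt/cmp_versions trace above (used here to decide whether the installed lcov predates 2.x and needs the extra --rc options) boils down to a field-by-field numeric compare. A simplified sketch; the real scripts/common.sh also validates each field before comparing:

cmp_versions() {
    local IFS=.-: op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov, enabling branch/function coverage rc options"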
00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:20.859 21:34:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.859 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:20.860 Cannot find device "nvmf_init_br" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:20.860 Cannot find device "nvmf_init_br2" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:20.860 Cannot find device "nvmf_tgt_br" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.860 Cannot find device "nvmf_tgt_br2" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.860 Cannot find device "nvmf_init_br" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.860 Cannot find device "nvmf_init_br2" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.860 Cannot find device "nvmf_tgt_br" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.860 Cannot find device "nvmf_tgt_br2" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.860 Cannot find device "nvmf_br" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.860 Cannot find device "nvmf_init_if" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.860 Cannot find device "nvmf_init_if2" 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.860 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:21.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.167 ms 00:07:21.119 00:07:21.119 --- 10.0.0.3 ping statistics --- 00:07:21.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.119 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:21.119 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:21.377 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:21.377 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:07:21.377 00:07:21.377 --- 10.0.0.4 ping statistics --- 00:07:21.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.377 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:21.377 00:07:21.377 --- 10.0.0.1 ping statistics --- 00:07:21.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.377 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:21.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:07:21.377 00:07:21.377 --- 10.0.0.2 ping statistics --- 00:07:21.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.377 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62430 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62430 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62430 ']' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.377 21:34:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.377 [2024-12-10 21:34:22.022698] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:21.377 [2024-12-10 21:34:22.022828] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.636 [2024-12-10 21:34:22.176237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.636 [2024-12-10 21:34:22.228667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.636 [2024-12-10 21:34:22.228747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.636 [2024-12-10 21:34:22.228764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.636 [2024-12-10 21:34:22.228777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.636 [2024-12-10 21:34:22.228788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.636 [2024-12-10 21:34:22.229866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.636 [2024-12-10 21:34:22.229960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.636 [2024-12-10 21:34:22.230034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.636 [2024-12-10 21:34:22.230040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.636 [2024-12-10 21:34:22.263957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.636 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.636 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:21.636 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.636 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.636 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 [2024-12-10 21:34:22.429972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
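Condensed, the target bring-up traced above is: start nvmf_tgt inside the test namespace, wait for its RPC socket, then create the TCP transport. The rpc.py path and the bounded polling loop are assumptions standing in for waitforlisten; the flags mirror the trace:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for _ in $(seq 1 100); do [ -S /var/tmp/spdk.sock ] && break; sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192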
00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 Malloc0 00:07:21.895 [2024-12-10 21:34:22.506255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62477 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62477 /var/tmp/bdevperf.sock 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62477 ']' 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
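The rpcs.txt batch itself is not echoed into the trace; a typical equivalent matching the Malloc0 bdev (64 MiB, 512 B blocks), the cnode0 subsystem, the host0 host and the 10.0.0.3:4420 listener seen above would be (rpc.py path assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0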
00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:21.895 { 00:07:21.895 "params": { 00:07:21.895 "name": "Nvme$subsystem", 00:07:21.895 "trtype": "$TEST_TRANSPORT", 00:07:21.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:21.895 "adrfam": "ipv4", 00:07:21.895 "trsvcid": "$NVMF_PORT", 00:07:21.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:21.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:21.895 "hdgst": ${hdgst:-false}, 00:07:21.895 "ddgst": ${ddgst:-false} 00:07:21.895 }, 00:07:21.895 "method": "bdev_nvme_attach_controller" 00:07:21.895 } 00:07:21.895 EOF 00:07:21.895 )") 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:21.895 21:34:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:21.895 "params": { 00:07:21.895 "name": "Nvme0", 00:07:21.895 "trtype": "tcp", 00:07:21.895 "traddr": "10.0.0.3", 00:07:21.895 "adrfam": "ipv4", 00:07:21.895 "trsvcid": "4420", 00:07:21.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:21.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:21.895 "hdgst": false, 00:07:21.895 "ddgst": false 00:07:21.895 }, 00:07:21.895 "method": "bdev_nvme_attach_controller" 00:07:21.895 }' 00:07:21.895 [2024-12-10 21:34:22.618241] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:07:21.895 [2024-12-10 21:34:22.618359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62477 ] 00:07:22.153 [2024-12-10 21:34:22.768519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.153 [2024-12-10 21:34:22.816569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.153 [2024-12-10 21:34:22.859901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.412 Running I/O for 10 seconds... 
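The JSON printed above is only the bdev_nvme_attach_controller entry; bdevperf's --json input wraps it in the standard SPDK subsystems layout. An equivalent launch using a plain file instead of the /dev/fd/63 process substitution (file name is an assumption, flags mirror the trace):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10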
00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:22.412 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.674 21:34:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.674 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.674 [2024-12-10 21:34:23.419776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:22.674 [2024-12-10 21:34:23.420641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.674 [2024-12-10 21:34:23.420790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.674 [2024-12-10 21:34:23.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.420826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.420842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.420862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.420878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.420897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.420913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.420931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.420947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.420965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.420981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 
[2024-12-10 21:34:23.421001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 
21:34:23.421353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 
21:34:23.421731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.675 [2024-12-10 21:34:23.421839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.675 [2024-12-10 21:34:23.421855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.421877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.421895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.421914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.421932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.421950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.421967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.421986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 
21:34:23.422083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.676 [2024-12-10 21:34:23.422354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.422370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18efe30 is same with the state(6) to be set 00:07:22.676 [2024-12-10 21:34:23.423116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.676 [2024-12-10 21:34:23.423230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.423346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.676 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.676 [2024-12-10 21:34:23.423529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.423552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.676 [2024-12-10 21:34:23.423566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.423583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:22.676 [2024-12-10 21:34:23.423600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:22.676 [2024-12-10 21:34:23.423614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eb9d0 is same with the state(6) to be set 00:07:22.676 21:34:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:22.676 [2024-12-10 21:34:23.425009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:22.676 task offset: 57344 on job bdev=Nvme0n1 fails 00:07:22.676 00:07:22.676 Latency(us) 00:07:22.676 [2024-12-10T21:34:23.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.676 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:22.676 Job: Nvme0n1 ended in about 0.45 seconds with error 00:07:22.676 Verification LBA range: start 0x0 length 0x400 00:07:22.676 Nvme0n1 : 0.45 994.56 62.16 142.08 0.00 54486.23 4498.15 55765.18 00:07:22.676 [2024-12-10T21:34:23.459Z] =================================================================================================================== 00:07:22.676 [2024-12-10T21:34:23.459Z] Total : 994.56 62.16 142.08 0.00 54486.23 4498.15 55765.18 00:07:22.676 [2024-12-10 21:34:23.427943] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.676 [2024-12-10 21:34:23.428111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb9d0 (9): Bad file descriptor 00:07:22.676 [2024-12-10 21:34:23.438712] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
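(For orientation: the host-management check driven above amounts to roughly the following shell sketch. The bdevperf RPC socket path is an assumption; the bdev name, NQNs, jq filter and the 100-read threshold are taken from the trace itself.)

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock   # assumed RPC socket of the bdevperf app driving I/O
ret=1
for i in $(seq 1 10); do
    # Count completed reads on the attached NVMe-oF bdev; a handful of I/O happen
    # during examine, so wait until at least 100 reads prove the workload is running.
    read_io_count=$("$rpc_py" -s "$bdevperf_sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
# With I/O still in flight, drop and re-add the host on the target side; the
# queued WRITEs are failed with "ABORTED - SQ DELETION", as the notices above show.
"$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1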
00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62477 00:07:24.051 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62477) - No such process 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:24.051 { 00:07:24.051 "params": { 00:07:24.051 "name": "Nvme$subsystem", 00:07:24.051 "trtype": "$TEST_TRANSPORT", 00:07:24.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.051 "adrfam": "ipv4", 00:07:24.051 "trsvcid": "$NVMF_PORT", 00:07:24.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.051 "hdgst": ${hdgst:-false}, 00:07:24.051 "ddgst": ${ddgst:-false} 00:07:24.051 }, 00:07:24.051 "method": "bdev_nvme_attach_controller" 00:07:24.051 } 00:07:24.051 EOF 00:07:24.051 )") 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:24.051 21:34:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:24.051 "params": { 00:07:24.051 "name": "Nvme0", 00:07:24.051 "trtype": "tcp", 00:07:24.051 "traddr": "10.0.0.3", 00:07:24.051 "adrfam": "ipv4", 00:07:24.051 "trsvcid": "4420", 00:07:24.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.051 "hdgst": false, 00:07:24.051 "ddgst": false 00:07:24.051 }, 00:07:24.051 "method": "bdev_nvme_attach_controller" 00:07:24.051 }' 00:07:24.051 [2024-12-10 21:34:24.481787] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
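(The relaunched bdevperf pass is configured purely through the JSON printed above and handed over as a file descriptor. A hand-written equivalent, assuming SPDK's standard "subsystems"/"config" wrapper and an illustrative file name, would be:)

cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the test: queue depth 64, 64 KiB verify I/O for 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1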
00:07:24.051 [2024-12-10 21:34:24.481886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62517 ] 00:07:24.051 [2024-12-10 21:34:24.682698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.051 [2024-12-10 21:34:24.732277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.051 [2024-12-10 21:34:24.786622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.310 Running I/O for 1 seconds... 00:07:25.242 1216.00 IOPS, 76.00 MiB/s 00:07:25.242 Latency(us) 00:07:25.242 [2024-12-10T21:34:26.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.242 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.242 Verification LBA range: start 0x0 length 0x400 00:07:25.242 Nvme0n1 : 1.03 1239.70 77.48 0.00 0.00 49657.29 6285.50 77689.95 00:07:25.242 [2024-12-10T21:34:26.025Z] =================================================================================================================== 00:07:25.242 [2024-12-10T21:34:26.025Z] Total : 1239.70 77.48 0.00 0.00 49657.29 6285.50 77689.95 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.500 rmmod nvme_tcp 00:07:25.500 rmmod nvme_fabrics 00:07:25.500 rmmod nvme_keyring 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62430 ']' 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62430 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62430 ']' 00:07:25.500 21:34:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62430 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62430 00:07:25.500 killing process with pid 62430 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62430' 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62430 00:07:25.500 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62430 00:07:25.758 [2024-12-10 21:34:26.420688] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:25.758 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:26.017 21:34:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:26.017 00:07:26.017 real 0m5.466s 00:07:26.017 user 0m19.281s 00:07:26.017 sys 0m1.475s 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.017 ************************************ 00:07:26.017 END TEST nvmf_host_management 00:07:26.017 ************************************ 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.017 ************************************ 00:07:26.017 START TEST nvmf_lvol 00:07:26.017 ************************************ 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.017 * Looking for test storage... 
00:07:26.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.017 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.276 --rc genhtml_branch_coverage=1 00:07:26.276 --rc genhtml_function_coverage=1 00:07:26.276 --rc genhtml_legend=1 00:07:26.276 --rc geninfo_all_blocks=1 00:07:26.276 --rc geninfo_unexecuted_blocks=1 00:07:26.276 00:07:26.276 ' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.276 --rc genhtml_branch_coverage=1 00:07:26.276 --rc genhtml_function_coverage=1 00:07:26.276 --rc genhtml_legend=1 00:07:26.276 --rc geninfo_all_blocks=1 00:07:26.276 --rc geninfo_unexecuted_blocks=1 00:07:26.276 00:07:26.276 ' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.276 --rc genhtml_branch_coverage=1 00:07:26.276 --rc genhtml_function_coverage=1 00:07:26.276 --rc genhtml_legend=1 00:07:26.276 --rc geninfo_all_blocks=1 00:07:26.276 --rc geninfo_unexecuted_blocks=1 00:07:26.276 00:07:26.276 ' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.276 --rc genhtml_branch_coverage=1 00:07:26.276 --rc genhtml_function_coverage=1 00:07:26.276 --rc genhtml_legend=1 00:07:26.276 --rc geninfo_all_blocks=1 00:07:26.276 --rc geninfo_unexecuted_blocks=1 00:07:26.276 00:07:26.276 ' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.276 21:34:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.276 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:26.277 
21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:26.277 Cannot find device "nvmf_init_br" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:26.277 Cannot find device "nvmf_init_br2" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:26.277 Cannot find device "nvmf_tgt_br" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.277 Cannot find device "nvmf_tgt_br2" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:26.277 Cannot find device "nvmf_init_br" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:26.277 Cannot find device "nvmf_init_br2" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:26.277 Cannot find device "nvmf_tgt_br" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:26.277 Cannot find device "nvmf_tgt_br2" 00:07:26.277 21:34:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:26.277 Cannot find device "nvmf_br" 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:26.277 Cannot find device "nvmf_init_if" 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:26.277 Cannot find device "nvmf_init_if2" 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.277 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:26.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:07:26.536 00:07:26.536 --- 10.0.0.3 ping statistics --- 00:07:26.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.536 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:26.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:26.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:07:26.536 00:07:26.536 --- 10.0.0.4 ping statistics --- 00:07:26.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.536 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:07:26.536 00:07:26.536 --- 10.0.0.1 ping statistics --- 00:07:26.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.536 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:26.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:26.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:07:26.536 00:07:26.536 --- 10.0.0.2 ping statistics --- 00:07:26.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.536 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62791 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62791 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62791 ']' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.536 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.795 [2024-12-10 21:34:27.379902] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:26.795 [2024-12-10 21:34:27.380046] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.795 [2024-12-10 21:34:27.534280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.053 [2024-12-10 21:34:27.583657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.053 [2024-12-10 21:34:27.583740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.053 [2024-12-10 21:34:27.583764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.053 [2024-12-10 21:34:27.583779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.053 [2024-12-10 21:34:27.583792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.053 [2024-12-10 21:34:27.584801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.053 [2024-12-10 21:34:27.584877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.053 [2024-12-10 21:34:27.584868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.053 [2024-12-10 21:34:27.621750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.053 21:34:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.311 [2024-12-10 21:34:27.992987] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.311 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:27.946 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:27.946 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.223 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:28.223 21:34:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:28.791 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:29.050 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e329d5ea-b25c-424f-a7e1-33501f80229f 00:07:29.050 21:34:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e329d5ea-b25c-424f-a7e1-33501f80229f lvol 20 00:07:29.307 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ee3ac7f2-fcad-4ee7-97ec-295ada8d7727 00:07:29.307 21:34:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.566 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee3ac7f2-fcad-4ee7-97ec-295ada8d7727 00:07:29.824 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:30.082 [2024-12-10 21:34:30.798272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:30.082 21:34:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:30.340 21:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62859 00:07:30.340 21:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:30.340 21:34:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:31.714 21:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ee3ac7f2-fcad-4ee7-97ec-295ada8d7727 MY_SNAPSHOT 00:07:31.972 21:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4931e435-a4cf-446e-9a20-e4f9b51e3c56 00:07:31.972 21:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ee3ac7f2-fcad-4ee7-97ec-295ada8d7727 30 00:07:32.231 21:34:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4931e435-a4cf-446e-9a20-e4f9b51e3c56 MY_CLONE 00:07:32.489 21:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a6b429e7-c763-43e7-8c22-ddb5e12e0615 00:07:32.489 21:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a6b429e7-c763-43e7-8c22-ddb5e12e0615 00:07:33.055 21:34:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62859 00:07:41.175 Initializing NVMe Controllers 00:07:41.175 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:41.175 Controller IO queue size 128, less than required. 00:07:41.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.175 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:41.175 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:41.175 Initialization complete. Launching workers. 
00:07:41.175 ======================================================== 00:07:41.175 Latency(us) 00:07:41.175 Device Information : IOPS MiB/s Average min max 00:07:41.175 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8873.04 34.66 14428.27 254.04 89363.03 00:07:41.175 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9698.31 37.88 13194.34 3652.97 46304.46 00:07:41.175 ======================================================== 00:07:41.175 Total : 18571.35 72.54 13783.89 254.04 89363.03 00:07:41.175 00:07:41.175 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:41.175 21:34:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ee3ac7f2-fcad-4ee7-97ec-295ada8d7727 00:07:41.742 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e329d5ea-b25c-424f-a7e1-33501f80229f 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.000 rmmod nvme_tcp 00:07:42.000 rmmod nvme_fabrics 00:07:42.000 rmmod nvme_keyring 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62791 ']' 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62791 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62791 ']' 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62791 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.000 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62791 00:07:42.258 killing process with pid 62791 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62791' 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62791 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62791 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:42.258 21:34:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:42.258 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:42.258 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:42.516 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.516 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:42.516 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:42.517 00:07:42.517 real 0m16.539s 00:07:42.517 user 1m7.507s 00:07:42.517 sys 0m4.791s 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.517 ************************************ 00:07:42.517 END TEST nvmf_lvol 00:07:42.517 ************************************ 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.517 ************************************ 00:07:42.517 START TEST nvmf_lvs_grow 00:07:42.517 ************************************ 00:07:42.517 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:42.776 * Looking for test storage... 00:07:42.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:42.776 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.777 --rc genhtml_branch_coverage=1 00:07:42.777 --rc genhtml_function_coverage=1 00:07:42.777 --rc genhtml_legend=1 00:07:42.777 --rc geninfo_all_blocks=1 00:07:42.777 --rc geninfo_unexecuted_blocks=1 00:07:42.777 00:07:42.777 ' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.777 --rc genhtml_branch_coverage=1 00:07:42.777 --rc genhtml_function_coverage=1 00:07:42.777 --rc genhtml_legend=1 00:07:42.777 --rc geninfo_all_blocks=1 00:07:42.777 --rc geninfo_unexecuted_blocks=1 00:07:42.777 00:07:42.777 ' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.777 --rc genhtml_branch_coverage=1 00:07:42.777 --rc genhtml_function_coverage=1 00:07:42.777 --rc genhtml_legend=1 00:07:42.777 --rc geninfo_all_blocks=1 00:07:42.777 --rc geninfo_unexecuted_blocks=1 00:07:42.777 00:07:42.777 ' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.777 --rc genhtml_branch_coverage=1 00:07:42.777 --rc genhtml_function_coverage=1 00:07:42.777 --rc genhtml_legend=1 00:07:42.777 --rc geninfo_all_blocks=1 00:07:42.777 --rc geninfo_unexecuted_blocks=1 00:07:42.777 00:07:42.777 ' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:42.777 21:34:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
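The nvmftestinit call that follows repeats the nvmf_veth_init network bring-up already seen in the nvmf_lvol test above. Condensed into a standalone sketch (same interface names and 10.0.0.x addressing as shown in the log; this is an illustration of the topology, not the nvmf/common.sh helper itself, and it needs root to run), the setup looks like:

    # Target network namespace plus two initiator-side and two target-side veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace where nvmf_tgt will later run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses stay on the host; target addresses live in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up on both sides of the namespace boundary.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the host-side peer interfaces together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Sanity checks, matching the pings in the log: host reaches the target
    # namespace and the namespace reaches the host.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1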
00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:42.777 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:42.778 Cannot find device "nvmf_init_br" 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:42.778 Cannot find device "nvmf_init_br2" 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:42.778 Cannot find device "nvmf_tgt_br" 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.778 Cannot find device "nvmf_tgt_br2" 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:42.778 Cannot find device "nvmf_init_br" 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:42.778 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:43.037 Cannot find device "nvmf_init_br2" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:43.037 Cannot find device "nvmf_tgt_br" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:43.037 Cannot find device "nvmf_tgt_br2" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:43.037 Cannot find device "nvmf_br" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:43.037 Cannot find device "nvmf_init_if" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:43.037 Cannot find device "nvmf_init_if2" 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.037 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
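The ipts wrapper used in the next lines tags every iptables rule it adds with an SPDK_NVMF comment, so the iptr call during teardown (the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence visible at the end of the lvol test above) can remove exactly those rules and nothing else. A minimal sketch of that tag-and-sweep pattern, assuming root privileges; the function bodies here are an illustration, not the nvmf/common.sh source:

    # Insert a rule and record the full original rule text in a comment tag,
    # e.g.: ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Drop every rule carrying the SPDK_NVMF tag in one pass.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # Example, mirroring the rules in the log: open NVMe/TCP port 4420 on both
    # initiator interfaces and allow bridge-internal forwarding, then sweep the
    # rules away during cleanup.
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # ... test runs here ...
    iptr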
00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:43.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.135 ms 00:07:43.296 00:07:43.296 --- 10.0.0.3 ping statistics --- 00:07:43.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.296 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:43.296 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:43.296 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:07:43.296 00:07:43.296 --- 10.0.0.4 ping statistics --- 00:07:43.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.296 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:43.296 00:07:43.296 --- 10.0.0.1 ping statistics --- 00:07:43.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.296 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:43.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:43.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:43.296 00:07:43.296 --- 10.0.0.2 ping statistics --- 00:07:43.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.296 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:43.296 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63243 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63243 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63243 ']' 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.297 21:34:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.297 [2024-12-10 21:34:43.968016] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:43.297 [2024-12-10 21:34:43.968107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.555 [2024-12-10 21:34:44.114279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.555 [2024-12-10 21:34:44.155065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.555 [2024-12-10 21:34:44.155143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.555 [2024-12-10 21:34:44.155160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.555 [2024-12-10 21:34:44.155173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.555 [2024-12-10 21:34:44.155183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.555 [2024-12-10 21:34:44.155610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.555 [2024-12-10 21:34:44.189383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.555 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:43.813 [2024-12-10 21:34:44.563960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.813 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:43.813 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.814 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.814 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.072 ************************************ 00:07:44.072 START TEST lvs_grow_clean 00:07:44.072 ************************************ 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:44.072 21:34:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:44.072 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:44.330 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:44.330 21:34:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:44.587 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=38353132-73a0-41f2-b0b9-02f326e5207f 00:07:44.587 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:07:44.587 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:44.845 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:44.845 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:44.845 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 38353132-73a0-41f2-b0b9-02f326e5207f lvol 150 00:07:45.411 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c0f3833d-6e5c-4b5c-a077-c5b19f173207 00:07:45.411 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.411 21:34:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:45.669 [2024-12-10 21:34:46.253498] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:45.669 [2024-12-10 21:34:46.253589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:45.669 true 00:07:45.669 21:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:07:45.669 21:34:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:45.927 21:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:45.927 21:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.185 21:34:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c0f3833d-6e5c-4b5c-a077-c5b19f173207 00:07:46.751 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:47.009 [2024-12-10 21:34:47.590556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:47.009 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63329 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63329 /var/tmp/bdevperf.sock 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63329 ']' 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.267 21:34:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.267 [2024-12-10 21:34:47.993719] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:07:47.267 [2024-12-10 21:34:47.993813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:07:47.525 [2024-12-10 21:34:48.136939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.525 [2024-12-10 21:34:48.175095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.525 [2024-12-10 21:34:48.209361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.784 21:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.784 21:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:47.784 21:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:48.042 Nvme0n1 00:07:48.042 21:34:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:48.608 [ 00:07:48.608 { 00:07:48.608 "name": "Nvme0n1", 00:07:48.608 "aliases": [ 00:07:48.608 "c0f3833d-6e5c-4b5c-a077-c5b19f173207" 00:07:48.608 ], 00:07:48.608 "product_name": "NVMe disk", 00:07:48.608 "block_size": 4096, 00:07:48.608 "num_blocks": 38912, 00:07:48.608 "uuid": "c0f3833d-6e5c-4b5c-a077-c5b19f173207", 00:07:48.608 "numa_id": -1, 00:07:48.608 "assigned_rate_limits": { 00:07:48.608 "rw_ios_per_sec": 0, 00:07:48.608 "rw_mbytes_per_sec": 0, 00:07:48.608 "r_mbytes_per_sec": 0, 00:07:48.608 "w_mbytes_per_sec": 0 00:07:48.608 }, 00:07:48.608 "claimed": false, 00:07:48.608 "zoned": false, 00:07:48.608 "supported_io_types": { 00:07:48.608 "read": true, 00:07:48.608 "write": true, 00:07:48.608 "unmap": true, 00:07:48.608 "flush": true, 00:07:48.608 "reset": true, 00:07:48.608 "nvme_admin": true, 00:07:48.608 "nvme_io": true, 00:07:48.608 "nvme_io_md": false, 00:07:48.608 "write_zeroes": true, 00:07:48.608 "zcopy": false, 00:07:48.608 "get_zone_info": false, 00:07:48.608 "zone_management": false, 00:07:48.608 "zone_append": false, 00:07:48.608 "compare": true, 00:07:48.608 "compare_and_write": true, 00:07:48.608 "abort": true, 00:07:48.608 "seek_hole": false, 00:07:48.608 "seek_data": false, 00:07:48.608 "copy": true, 00:07:48.608 "nvme_iov_md": false 00:07:48.608 }, 00:07:48.608 "memory_domains": [ 00:07:48.608 { 00:07:48.608 "dma_device_id": "system", 00:07:48.608 "dma_device_type": 1 00:07:48.608 } 00:07:48.608 ], 00:07:48.608 "driver_specific": { 00:07:48.608 "nvme": [ 00:07:48.608 { 00:07:48.608 "trid": { 00:07:48.608 "trtype": "TCP", 00:07:48.608 "adrfam": "IPv4", 00:07:48.608 "traddr": "10.0.0.3", 00:07:48.608 "trsvcid": "4420", 00:07:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:48.608 }, 00:07:48.608 "ctrlr_data": { 00:07:48.608 "cntlid": 1, 00:07:48.608 "vendor_id": "0x8086", 00:07:48.608 "model_number": "SPDK bdev Controller", 00:07:48.608 "serial_number": "SPDK0", 00:07:48.608 "firmware_revision": "25.01", 00:07:48.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.608 "oacs": { 00:07:48.608 "security": 0, 00:07:48.608 "format": 0, 00:07:48.608 "firmware": 0, 
00:07:48.608 "ns_manage": 0 00:07:48.608 }, 00:07:48.608 "multi_ctrlr": true, 00:07:48.608 "ana_reporting": false 00:07:48.608 }, 00:07:48.608 "vs": { 00:07:48.608 "nvme_version": "1.3" 00:07:48.608 }, 00:07:48.608 "ns_data": { 00:07:48.609 "id": 1, 00:07:48.609 "can_share": true 00:07:48.609 } 00:07:48.609 } 00:07:48.609 ], 00:07:48.609 "mp_policy": "active_passive" 00:07:48.609 } 00:07:48.609 } 00:07:48.609 ] 00:07:48.609 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63345 00:07:48.609 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:48.609 21:34:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.609 Running I/O for 10 seconds... 00:07:49.543 Latency(us) 00:07:49.543 [2024-12-10T21:34:50.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.543 Nvme0n1 : 1.00 6342.00 24.77 0.00 0.00 0.00 0.00 0.00 00:07:49.543 [2024-12-10T21:34:50.326Z] =================================================================================================================== 00:07:49.543 [2024-12-10T21:34:50.326Z] Total : 6342.00 24.77 0.00 0.00 0.00 0.00 0.00 00:07:49.543 00:07:50.492 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:07:50.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.492 Nvme0n1 : 2.00 6409.50 25.04 0.00 0.00 0.00 0.00 0.00 00:07:50.492 [2024-12-10T21:34:51.275Z] =================================================================================================================== 00:07:50.492 [2024-12-10T21:34:51.275Z] Total : 6409.50 25.04 0.00 0.00 0.00 0.00 0.00 00:07:50.492 00:07:50.750 true 00:07:50.750 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:07:50.750 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:51.009 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:51.009 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:51.009 21:34:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63345 00:07:51.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.575 Nvme0n1 : 3.00 6516.67 25.46 0.00 0.00 0.00 0.00 0.00 00:07:51.575 [2024-12-10T21:34:52.358Z] =================================================================================================================== 00:07:51.575 [2024-12-10T21:34:52.358Z] Total : 6516.67 25.46 0.00 0.00 0.00 0.00 0.00 00:07:51.575 00:07:52.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.509 Nvme0n1 : 4.00 6506.75 25.42 0.00 0.00 0.00 0.00 0.00 00:07:52.509 [2024-12-10T21:34:53.292Z] 
=================================================================================================================== 00:07:52.509 [2024-12-10T21:34:53.292Z] Total : 6506.75 25.42 0.00 0.00 0.00 0.00 0.00 00:07:52.509 00:07:53.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.493 Nvme0n1 : 5.00 6323.00 24.70 0.00 0.00 0.00 0.00 0.00 00:07:53.493 [2024-12-10T21:34:54.276Z] =================================================================================================================== 00:07:53.493 [2024-12-10T21:34:54.276Z] Total : 6323.00 24.70 0.00 0.00 0.00 0.00 0.00 00:07:53.493 00:07:54.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.871 Nvme0n1 : 6.00 6268.00 24.48 0.00 0.00 0.00 0.00 0.00 00:07:54.872 [2024-12-10T21:34:55.655Z] =================================================================================================================== 00:07:54.872 [2024-12-10T21:34:55.655Z] Total : 6268.00 24.48 0.00 0.00 0.00 0.00 0.00 00:07:54.872 00:07:55.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.804 Nvme0n1 : 7.00 6183.43 24.15 0.00 0.00 0.00 0.00 0.00 00:07:55.804 [2024-12-10T21:34:56.587Z] =================================================================================================================== 00:07:55.804 [2024-12-10T21:34:56.587Z] Total : 6183.43 24.15 0.00 0.00 0.00 0.00 0.00 00:07:55.804 00:07:56.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.739 Nvme0n1 : 8.00 6188.38 24.17 0.00 0.00 0.00 0.00 0.00 00:07:56.739 [2024-12-10T21:34:57.522Z] =================================================================================================================== 00:07:56.739 [2024-12-10T21:34:57.522Z] Total : 6188.38 24.17 0.00 0.00 0.00 0.00 0.00 00:07:56.739 00:07:57.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.675 Nvme0n1 : 9.00 6121.67 23.91 0.00 0.00 0.00 0.00 0.00 00:07:57.675 [2024-12-10T21:34:58.458Z] =================================================================================================================== 00:07:57.675 [2024-12-10T21:34:58.458Z] Total : 6121.67 23.91 0.00 0.00 0.00 0.00 0.00 00:07:57.675 00:07:58.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.611 Nvme0n1 : 10.00 5954.00 23.26 0.00 0.00 0.00 0.00 0.00 00:07:58.611 [2024-12-10T21:34:59.394Z] =================================================================================================================== 00:07:58.611 [2024-12-10T21:34:59.394Z] Total : 5954.00 23.26 0.00 0.00 0.00 0.00 0.00 00:07:58.611 00:07:58.611 00:07:58.611 Latency(us) 00:07:58.611 [2024-12-10T21:34:59.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.611 Nvme0n1 : 10.01 5962.08 23.29 0.00 0.00 21458.19 7804.74 141081.13 00:07:58.611 [2024-12-10T21:34:59.394Z] =================================================================================================================== 00:07:58.611 [2024-12-10T21:34:59.394Z] Total : 5962.08 23.29 0.00 0.00 21458.19 7804.74 141081.13 00:07:58.611 { 00:07:58.611 "results": [ 00:07:58.611 { 00:07:58.611 "job": "Nvme0n1", 00:07:58.611 "core_mask": "0x2", 00:07:58.611 "workload": "randwrite", 00:07:58.611 "status": "finished", 00:07:58.611 "queue_depth": 128, 00:07:58.611 "io_size": 4096, 00:07:58.611 "runtime": 
10.007909, 00:07:58.611 "iops": 5962.084587299904, 00:07:58.611 "mibps": 23.28939291914025, 00:07:58.611 "io_failed": 0, 00:07:58.611 "io_timeout": 0, 00:07:58.611 "avg_latency_us": 21458.185318763826, 00:07:58.611 "min_latency_us": 7804.741818181818, 00:07:58.611 "max_latency_us": 141081.13454545455 00:07:58.611 } 00:07:58.611 ], 00:07:58.611 "core_count": 1 00:07:58.611 } 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63329 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63329 ']' 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63329 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63329 00:07:58.611 killing process with pid 63329 00:07:58.611 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.611 00:07:58.611 Latency(us) 00:07:58.611 [2024-12-10T21:34:59.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.611 [2024-12-10T21:34:59.394Z] =================================================================================================================== 00:07:58.611 [2024-12-10T21:34:59.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63329' 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63329 00:07:58.611 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63329 00:07:58.870 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:59.128 21:34:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.696 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:07:59.696 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:59.954 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:59.954 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:59.954 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:00.234 [2024-12-10 21:35:00.931267] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.234 21:35:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:00.801 request: 00:08:00.801 { 00:08:00.801 "uuid": "38353132-73a0-41f2-b0b9-02f326e5207f", 00:08:00.801 "method": "bdev_lvol_get_lvstores", 00:08:00.801 "req_id": 1 00:08:00.801 } 00:08:00.801 Got JSON-RPC error response 00:08:00.801 response: 00:08:00.801 { 00:08:00.801 "code": -19, 00:08:00.801 "message": "No such device" 00:08:00.801 } 00:08:00.801 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:00.801 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.801 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.801 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.801 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.143 aio_bdev 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
c0f3833d-6e5c-4b5c-a077-c5b19f173207 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c0f3833d-6e5c-4b5c-a077-c5b19f173207 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.143 21:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.709 21:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c0f3833d-6e5c-4b5c-a077-c5b19f173207 -t 2000 00:08:01.967 [ 00:08:01.967 { 00:08:01.967 "name": "c0f3833d-6e5c-4b5c-a077-c5b19f173207", 00:08:01.967 "aliases": [ 00:08:01.967 "lvs/lvol" 00:08:01.967 ], 00:08:01.967 "product_name": "Logical Volume", 00:08:01.967 "block_size": 4096, 00:08:01.967 "num_blocks": 38912, 00:08:01.967 "uuid": "c0f3833d-6e5c-4b5c-a077-c5b19f173207", 00:08:01.967 "assigned_rate_limits": { 00:08:01.967 "rw_ios_per_sec": 0, 00:08:01.967 "rw_mbytes_per_sec": 0, 00:08:01.967 "r_mbytes_per_sec": 0, 00:08:01.967 "w_mbytes_per_sec": 0 00:08:01.967 }, 00:08:01.967 "claimed": false, 00:08:01.967 "zoned": false, 00:08:01.967 "supported_io_types": { 00:08:01.967 "read": true, 00:08:01.967 "write": true, 00:08:01.967 "unmap": true, 00:08:01.967 "flush": false, 00:08:01.967 "reset": true, 00:08:01.967 "nvme_admin": false, 00:08:01.967 "nvme_io": false, 00:08:01.967 "nvme_io_md": false, 00:08:01.967 "write_zeroes": true, 00:08:01.967 "zcopy": false, 00:08:01.967 "get_zone_info": false, 00:08:01.967 "zone_management": false, 00:08:01.967 "zone_append": false, 00:08:01.967 "compare": false, 00:08:01.967 "compare_and_write": false, 00:08:01.967 "abort": false, 00:08:01.967 "seek_hole": true, 00:08:01.967 "seek_data": true, 00:08:01.967 "copy": false, 00:08:01.967 "nvme_iov_md": false 00:08:01.967 }, 00:08:01.967 "driver_specific": { 00:08:01.967 "lvol": { 00:08:01.967 "lvol_store_uuid": "38353132-73a0-41f2-b0b9-02f326e5207f", 00:08:01.967 "base_bdev": "aio_bdev", 00:08:01.967 "thin_provision": false, 00:08:01.967 "num_allocated_clusters": 38, 00:08:01.967 "snapshot": false, 00:08:01.967 "clone": false, 00:08:01.967 "esnap_clone": false 00:08:01.967 } 00:08:01.967 } 00:08:01.967 } 00:08:01.967 ] 00:08:01.967 21:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:01.967 21:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:01.967 21:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.532 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.532 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:08:02.532 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:02.790 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.790 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c0f3833d-6e5c-4b5c-a077-c5b19f173207 00:08:03.048 21:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38353132-73a0-41f2-b0b9-02f326e5207f 00:08:03.306 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.564 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.130 ************************************ 00:08:04.130 END TEST lvs_grow_clean 00:08:04.130 ************************************ 00:08:04.130 00:08:04.130 real 0m20.134s 00:08:04.130 user 0m18.903s 00:08:04.130 sys 0m2.799s 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.130 ************************************ 00:08:04.130 START TEST lvs_grow_dirty 00:08:04.130 ************************************ 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:04.130 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:04.131 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:04.131 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:04.131 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.131 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.131 21:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.388 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.389 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.647 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:04.647 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:04.647 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 lvol 150 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.213 21:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:05.472 [2024-12-10 21:35:06.219347] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:05.472 [2024-12-10 21:35:06.219439] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:05.472 true 00:08:05.472 21:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:05.472 21:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.038 21:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:06.038 21:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.296 21:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:06.555 21:35:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:06.813 [2024-12-10 21:35:07.496038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:06.813 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:07.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63615 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63615 /var/tmp/bdevperf.sock 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63615 ']' 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.072 21:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.331 [2024-12-10 21:35:07.865729] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:08:07.331 [2024-12-10 21:35:07.865866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63615 ] 00:08:07.331 [2024-12-10 21:35:08.018666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.331 [2024-12-10 21:35:08.052365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.331 [2024-12-10 21:35:08.083315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.589 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.589 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:07.589 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:07.846 Nvme0n1 00:08:07.846 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:08.412 [ 00:08:08.412 { 00:08:08.413 "name": "Nvme0n1", 00:08:08.413 "aliases": [ 00:08:08.413 "7fc05948-fba2-4894-ac85-5aefa0bd3e02" 00:08:08.413 ], 00:08:08.413 "product_name": "NVMe disk", 00:08:08.413 "block_size": 4096, 00:08:08.413 "num_blocks": 38912, 00:08:08.413 "uuid": "7fc05948-fba2-4894-ac85-5aefa0bd3e02", 00:08:08.413 "numa_id": -1, 00:08:08.413 "assigned_rate_limits": { 00:08:08.413 "rw_ios_per_sec": 0, 00:08:08.413 "rw_mbytes_per_sec": 0, 00:08:08.413 "r_mbytes_per_sec": 0, 00:08:08.413 "w_mbytes_per_sec": 0 00:08:08.413 }, 00:08:08.413 "claimed": false, 00:08:08.413 "zoned": false, 00:08:08.413 "supported_io_types": { 00:08:08.413 "read": true, 00:08:08.413 "write": true, 00:08:08.413 "unmap": true, 00:08:08.413 "flush": true, 00:08:08.413 "reset": true, 00:08:08.413 "nvme_admin": true, 00:08:08.413 "nvme_io": true, 00:08:08.413 "nvme_io_md": false, 00:08:08.413 "write_zeroes": true, 00:08:08.413 "zcopy": false, 00:08:08.413 "get_zone_info": false, 00:08:08.413 "zone_management": false, 00:08:08.413 "zone_append": false, 00:08:08.413 "compare": true, 00:08:08.413 "compare_and_write": true, 00:08:08.413 "abort": true, 00:08:08.413 "seek_hole": false, 00:08:08.413 "seek_data": false, 00:08:08.413 "copy": true, 00:08:08.413 "nvme_iov_md": false 00:08:08.413 }, 00:08:08.413 "memory_domains": [ 00:08:08.413 { 00:08:08.413 "dma_device_id": "system", 00:08:08.413 "dma_device_type": 1 00:08:08.413 } 00:08:08.413 ], 00:08:08.413 "driver_specific": { 00:08:08.413 "nvme": [ 00:08:08.413 { 00:08:08.413 "trid": { 00:08:08.413 "trtype": "TCP", 00:08:08.413 "adrfam": "IPv4", 00:08:08.413 "traddr": "10.0.0.3", 00:08:08.413 "trsvcid": "4420", 00:08:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:08.413 }, 00:08:08.413 "ctrlr_data": { 00:08:08.413 "cntlid": 1, 00:08:08.413 "vendor_id": "0x8086", 00:08:08.413 "model_number": "SPDK bdev Controller", 00:08:08.413 "serial_number": "SPDK0", 00:08:08.413 "firmware_revision": "25.01", 00:08:08.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.413 "oacs": { 00:08:08.413 "security": 0, 00:08:08.413 "format": 0, 00:08:08.413 "firmware": 0, 
00:08:08.413 "ns_manage": 0 00:08:08.413 }, 00:08:08.413 "multi_ctrlr": true, 00:08:08.413 "ana_reporting": false 00:08:08.413 }, 00:08:08.413 "vs": { 00:08:08.413 "nvme_version": "1.3" 00:08:08.413 }, 00:08:08.413 "ns_data": { 00:08:08.413 "id": 1, 00:08:08.413 "can_share": true 00:08:08.413 } 00:08:08.413 } 00:08:08.413 ], 00:08:08.413 "mp_policy": "active_passive" 00:08:08.413 } 00:08:08.413 } 00:08:08.413 ] 00:08:08.413 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63631 00:08:08.413 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:08.413 21:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:08.413 Running I/O for 10 seconds... 00:08:09.348 Latency(us) 00:08:09.348 [2024-12-10T21:35:10.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.348 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:09.348 [2024-12-10T21:35:10.131Z] =================================================================================================================== 00:08:09.348 [2024-12-10T21:35:10.131Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:09.348 00:08:10.283 21:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:10.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.283 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:10.283 [2024-12-10T21:35:11.066Z] =================================================================================================================== 00:08:10.283 [2024-12-10T21:35:11.066Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:10.283 00:08:10.850 true 00:08:10.850 21:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:10.850 21:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:11.108 21:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:11.109 21:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:11.109 21:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63631 00:08:11.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.367 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:11.367 [2024-12-10T21:35:12.150Z] =================================================================================================================== 00:08:11.367 [2024-12-10T21:35:12.150Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:11.367 00:08:12.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.300 Nvme0n1 : 4.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:12.300 [2024-12-10T21:35:13.083Z] 
=================================================================================================================== 00:08:12.300 [2024-12-10T21:35:13.083Z] Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:12.300 00:08:13.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.674 Nvme0n1 : 5.00 6524.80 25.49 0.00 0.00 0.00 0.00 0.00 00:08:13.674 [2024-12-10T21:35:14.457Z] =================================================================================================================== 00:08:13.674 [2024-12-10T21:35:14.457Z] Total : 6524.80 25.49 0.00 0.00 0.00 0.00 0.00 00:08:13.674 00:08:14.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.609 Nvme0n1 : 6.00 6474.50 25.29 0.00 0.00 0.00 0.00 0.00 00:08:14.609 [2024-12-10T21:35:15.392Z] =================================================================================================================== 00:08:14.609 [2024-12-10T21:35:15.392Z] Total : 6474.50 25.29 0.00 0.00 0.00 0.00 0.00 00:08:14.609 00:08:15.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.544 Nvme0n1 : 7.00 6402.29 25.01 0.00 0.00 0.00 0.00 0.00 00:08:15.544 [2024-12-10T21:35:16.327Z] =================================================================================================================== 00:08:15.544 [2024-12-10T21:35:16.327Z] Total : 6402.29 25.01 0.00 0.00 0.00 0.00 0.00 00:08:15.544 00:08:16.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.551 Nvme0n1 : 8.00 6364.00 24.86 0.00 0.00 0.00 0.00 0.00 00:08:16.551 [2024-12-10T21:35:17.334Z] =================================================================================================================== 00:08:16.551 [2024-12-10T21:35:17.334Z] Total : 6364.00 24.86 0.00 0.00 0.00 0.00 0.00 00:08:16.551 00:08:17.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.486 Nvme0n1 : 9.00 6320.11 24.69 0.00 0.00 0.00 0.00 0.00 00:08:17.486 [2024-12-10T21:35:18.269Z] =================================================================================================================== 00:08:17.486 [2024-12-10T21:35:18.269Z] Total : 6320.11 24.69 0.00 0.00 0.00 0.00 0.00 00:08:17.486 00:08:18.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.420 Nvme0n1 : 10.00 6285.00 24.55 0.00 0.00 0.00 0.00 0.00 00:08:18.420 [2024-12-10T21:35:19.203Z] =================================================================================================================== 00:08:18.420 [2024-12-10T21:35:19.203Z] Total : 6285.00 24.55 0.00 0.00 0.00 0.00 0.00 00:08:18.420 00:08:18.420 00:08:18.420 Latency(us) 00:08:18.420 [2024-12-10T21:35:19.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.420 Nvme0n1 : 10.01 6292.96 24.58 0.00 0.00 20333.38 9770.82 170631.91 00:08:18.420 [2024-12-10T21:35:19.203Z] =================================================================================================================== 00:08:18.420 [2024-12-10T21:35:19.203Z] Total : 6292.96 24.58 0.00 0.00 20333.38 9770.82 170631.91 00:08:18.420 { 00:08:18.420 "results": [ 00:08:18.420 { 00:08:18.420 "job": "Nvme0n1", 00:08:18.420 "core_mask": "0x2", 00:08:18.420 "workload": "randwrite", 00:08:18.420 "status": "finished", 00:08:18.420 "queue_depth": 128, 00:08:18.420 "io_size": 4096, 00:08:18.420 "runtime": 
10.007694, 00:08:18.420 "iops": 6292.958197962487, 00:08:18.420 "mibps": 24.581867960790966, 00:08:18.420 "io_failed": 0, 00:08:18.420 "io_timeout": 0, 00:08:18.420 "avg_latency_us": 20333.380798258553, 00:08:18.420 "min_latency_us": 9770.821818181817, 00:08:18.420 "max_latency_us": 170631.91272727272 00:08:18.420 } 00:08:18.420 ], 00:08:18.420 "core_count": 1 00:08:18.420 } 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63615 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63615 ']' 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63615 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.420 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63615 00:08:18.420 killing process with pid 63615 00:08:18.420 Received shutdown signal, test time was about 10.000000 seconds 00:08:18.420 00:08:18.420 Latency(us) 00:08:18.420 [2024-12-10T21:35:19.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.420 [2024-12-10T21:35:19.203Z] =================================================================================================================== 00:08:18.420 [2024-12-10T21:35:19.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:18.421 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:18.421 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:18.421 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63615' 00:08:18.421 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63615 00:08:18.421 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63615 00:08:18.679 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:18.938 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:19.196 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:19.196 21:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63243 
00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63243 00:08:19.763 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63243 Killed "${NVMF_APP[@]}" "$@" 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63770 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63770 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63770 ']' 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.763 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.763 [2024-12-10 21:35:20.352623] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:08:19.763 [2024-12-10 21:35:20.352758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.763 [2024-12-10 21:35:20.499913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.021 [2024-12-10 21:35:20.547345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.021 [2024-12-10 21:35:20.547753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.021 [2024-12-10 21:35:20.547787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.021 [2024-12-10 21:35:20.547806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.021 [2024-12-10 21:35:20.547818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
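The lvs_grow_dirty path differs from the clean run in one step: the original nvmf target (pid 63243) is killed with SIGKILL while the lvstore metadata on the aio file is still dirty, and a fresh target is started. The trace below then re-registers the aio bdev, at which point the blobstore performs recovery and the lvol reappears. A hedged sketch of that recovery check, again using only RPCs seen in this log (the file path and rpc.py location are assumptions):

#!/usr/bin/env bash
# Sketch only: after the target restart, re-attach the same backing file and
# check that blobstore recovery brought the dirty lvstore back.
set -euo pipefail
AIO_FILE=/tmp/aio_file                              # same hypothetical file as in the first sketch
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # bs_recover runs here for a dirty lvstore
rpc.py bdev_wait_for_examine                        # let vbdev_lvol finish examining the new bdev
lvs=$(rpc.py bdev_lvol_get_lvstores | jq -r '.[0].uuid')
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 in this run
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99: the grow survived the kill
# teardown mirrors the script: lvol, then lvstore, then the aio bdev
rpc.py bdev_lvol_delete lvs/lvol
rpc.py bdev_lvol_delete_lvstore -u "$lvs"
rpc.py bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"
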
00:08:20.021 [2024-12-10 21:35:20.548240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.021 [2024-12-10 21:35:20.585469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.021 21:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.614 [2024-12-10 21:35:21.138078] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:20.614 [2024-12-10 21:35:21.138309] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:20.614 [2024-12-10 21:35:21.138434] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.614 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:20.871 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7fc05948-fba2-4894-ac85-5aefa0bd3e02 -t 2000 00:08:21.437 [ 00:08:21.437 { 00:08:21.437 "name": "7fc05948-fba2-4894-ac85-5aefa0bd3e02", 00:08:21.437 "aliases": [ 00:08:21.437 "lvs/lvol" 00:08:21.437 ], 00:08:21.437 "product_name": "Logical Volume", 00:08:21.437 "block_size": 4096, 00:08:21.437 "num_blocks": 38912, 00:08:21.437 "uuid": "7fc05948-fba2-4894-ac85-5aefa0bd3e02", 00:08:21.437 "assigned_rate_limits": { 00:08:21.437 "rw_ios_per_sec": 0, 00:08:21.437 "rw_mbytes_per_sec": 0, 00:08:21.437 "r_mbytes_per_sec": 0, 00:08:21.437 "w_mbytes_per_sec": 0 00:08:21.437 }, 00:08:21.437 
"claimed": false, 00:08:21.437 "zoned": false, 00:08:21.437 "supported_io_types": { 00:08:21.437 "read": true, 00:08:21.437 "write": true, 00:08:21.437 "unmap": true, 00:08:21.437 "flush": false, 00:08:21.437 "reset": true, 00:08:21.437 "nvme_admin": false, 00:08:21.437 "nvme_io": false, 00:08:21.437 "nvme_io_md": false, 00:08:21.437 "write_zeroes": true, 00:08:21.437 "zcopy": false, 00:08:21.437 "get_zone_info": false, 00:08:21.437 "zone_management": false, 00:08:21.437 "zone_append": false, 00:08:21.437 "compare": false, 00:08:21.437 "compare_and_write": false, 00:08:21.437 "abort": false, 00:08:21.437 "seek_hole": true, 00:08:21.437 "seek_data": true, 00:08:21.437 "copy": false, 00:08:21.437 "nvme_iov_md": false 00:08:21.437 }, 00:08:21.437 "driver_specific": { 00:08:21.437 "lvol": { 00:08:21.437 "lvol_store_uuid": "cbb1f2da-3a63-4776-b1d0-e69c5101bc34", 00:08:21.437 "base_bdev": "aio_bdev", 00:08:21.437 "thin_provision": false, 00:08:21.437 "num_allocated_clusters": 38, 00:08:21.437 "snapshot": false, 00:08:21.437 "clone": false, 00:08:21.437 "esnap_clone": false 00:08:21.437 } 00:08:21.437 } 00:08:21.437 } 00:08:21.437 ] 00:08:21.437 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:21.437 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:21.437 21:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:21.437 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:21.437 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:21.437 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:21.696 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:21.696 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.953 [2024-12-10 21:35:22.719825] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.212 21:35:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.212 21:35:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:22.470 request: 00:08:22.470 { 00:08:22.470 "uuid": "cbb1f2da-3a63-4776-b1d0-e69c5101bc34", 00:08:22.470 "method": "bdev_lvol_get_lvstores", 00:08:22.470 "req_id": 1 00:08:22.470 } 00:08:22.470 Got JSON-RPC error response 00:08:22.470 response: 00:08:22.470 { 00:08:22.470 "code": -19, 00:08:22.470 "message": "No such device" 00:08:22.470 } 00:08:22.470 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:22.470 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.470 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.470 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.470 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.728 aio_bdev 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.728 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.987 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7fc05948-fba2-4894-ac85-5aefa0bd3e02 -t 2000 00:08:23.246 [ 00:08:23.246 { 
00:08:23.246 "name": "7fc05948-fba2-4894-ac85-5aefa0bd3e02", 00:08:23.246 "aliases": [ 00:08:23.246 "lvs/lvol" 00:08:23.246 ], 00:08:23.246 "product_name": "Logical Volume", 00:08:23.246 "block_size": 4096, 00:08:23.246 "num_blocks": 38912, 00:08:23.246 "uuid": "7fc05948-fba2-4894-ac85-5aefa0bd3e02", 00:08:23.246 "assigned_rate_limits": { 00:08:23.246 "rw_ios_per_sec": 0, 00:08:23.246 "rw_mbytes_per_sec": 0, 00:08:23.246 "r_mbytes_per_sec": 0, 00:08:23.246 "w_mbytes_per_sec": 0 00:08:23.246 }, 00:08:23.246 "claimed": false, 00:08:23.246 "zoned": false, 00:08:23.246 "supported_io_types": { 00:08:23.246 "read": true, 00:08:23.246 "write": true, 00:08:23.246 "unmap": true, 00:08:23.246 "flush": false, 00:08:23.246 "reset": true, 00:08:23.246 "nvme_admin": false, 00:08:23.246 "nvme_io": false, 00:08:23.246 "nvme_io_md": false, 00:08:23.246 "write_zeroes": true, 00:08:23.246 "zcopy": false, 00:08:23.246 "get_zone_info": false, 00:08:23.246 "zone_management": false, 00:08:23.246 "zone_append": false, 00:08:23.246 "compare": false, 00:08:23.246 "compare_and_write": false, 00:08:23.246 "abort": false, 00:08:23.246 "seek_hole": true, 00:08:23.246 "seek_data": true, 00:08:23.246 "copy": false, 00:08:23.246 "nvme_iov_md": false 00:08:23.246 }, 00:08:23.246 "driver_specific": { 00:08:23.246 "lvol": { 00:08:23.246 "lvol_store_uuid": "cbb1f2da-3a63-4776-b1d0-e69c5101bc34", 00:08:23.246 "base_bdev": "aio_bdev", 00:08:23.246 "thin_provision": false, 00:08:23.246 "num_allocated_clusters": 38, 00:08:23.246 "snapshot": false, 00:08:23.246 "clone": false, 00:08:23.246 "esnap_clone": false 00:08:23.246 } 00:08:23.246 } 00:08:23.246 } 00:08:23.246 ] 00:08:23.246 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:23.246 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:23.246 21:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.504 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.504 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.504 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:23.761 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.761 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7fc05948-fba2-4894-ac85-5aefa0bd3e02 00:08:24.327 21:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34 00:08:24.584 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.842 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:25.103 00:08:25.103 real 0m21.086s 00:08:25.103 user 0m45.128s 00:08:25.103 sys 0m8.152s 00:08:25.103 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.103 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.103 ************************************ 00:08:25.103 END TEST lvs_grow_dirty 00:08:25.103 ************************************ 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:25.362 nvmf_trace.0 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.362 21:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:25.620 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.620 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:25.620 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.621 rmmod nvme_tcp 00:08:25.621 rmmod nvme_fabrics 00:08:25.621 rmmod nvme_keyring 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63770 ']' 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63770 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63770 ']' 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63770 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:25.621 21:35:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63770 00:08:25.621 killing process with pid 63770 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63770' 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63770 00:08:25.621 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63770 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.879 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.880 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:25.880 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.880 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.880 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:26.139 ************************************ 00:08:26.139 END TEST nvmf_lvs_grow 00:08:26.139 ************************************ 00:08:26.139 00:08:26.139 real 0m43.404s 00:08:26.139 user 1m10.758s 00:08:26.139 sys 0m11.632s 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.139 ************************************ 00:08:26.139 START TEST nvmf_bdev_io_wait 00:08:26.139 ************************************ 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:26.139 * Looking for test storage... 
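For reference, the lvs_grow_dirty epilogue that closed out the previous test reduces to the following rpc.py sequence. This is a condensed sketch assembled only from the commands already traced above, with paths shortened to be relative to the SPDK repo root; the UUIDs are the ones this particular run produced.

    # Re-register the backing AIO bdev so the dirty lvstore is examined again
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine
    # The lvol and its store reappear with their original UUIDs
    scripts/rpc.py bdev_get_bdevs -b 7fc05948-fba2-4894-ac85-5aefa0bd3e02 -t 2000
    scripts/rpc.py bdev_lvol_get_lvstores -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34
    # Teardown: lvol first, then the lvstore, the AIO bdev, and the backing file
    scripts/rpc.py bdev_lvol_delete 7fc05948-fba2-4894-ac85-5aefa0bd3e02
    scripts/rpc.py bdev_lvol_delete_lvstore -u cbb1f2da-3a63-4776-b1d0-e69c5101bc34
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f test/nvmf/target/aio_bdev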
00:08:26.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.139 --rc genhtml_branch_coverage=1 00:08:26.139 --rc genhtml_function_coverage=1 00:08:26.139 --rc genhtml_legend=1 00:08:26.139 --rc geninfo_all_blocks=1 00:08:26.139 --rc geninfo_unexecuted_blocks=1 00:08:26.139 00:08:26.139 ' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.139 --rc genhtml_branch_coverage=1 00:08:26.139 --rc genhtml_function_coverage=1 00:08:26.139 --rc genhtml_legend=1 00:08:26.139 --rc geninfo_all_blocks=1 00:08:26.139 --rc geninfo_unexecuted_blocks=1 00:08:26.139 00:08:26.139 ' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.139 --rc genhtml_branch_coverage=1 00:08:26.139 --rc genhtml_function_coverage=1 00:08:26.139 --rc genhtml_legend=1 00:08:26.139 --rc geninfo_all_blocks=1 00:08:26.139 --rc geninfo_unexecuted_blocks=1 00:08:26.139 00:08:26.139 ' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.139 --rc genhtml_branch_coverage=1 00:08:26.139 --rc genhtml_function_coverage=1 00:08:26.139 --rc genhtml_legend=1 00:08:26.139 --rc geninfo_all_blocks=1 00:08:26.139 --rc geninfo_unexecuted_blocks=1 00:08:26.139 00:08:26.139 ' 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.139 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.399 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
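The nvmftestinit call that follows wires up a small veth/bridge topology so the host-side initiator addresses (10.0.0.1 and 10.0.0.2) can reach the target running inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4). A minimal sketch of that wiring, condensed from the ip and iptables commands traced below and showing only one of the two veth pairs (link-up steps and iptables comments omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT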
00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.399 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.400 
21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:26.400 Cannot find device "nvmf_init_br" 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:26.400 Cannot find device "nvmf_init_br2" 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:26.400 Cannot find device "nvmf_tgt_br" 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.400 Cannot find device "nvmf_tgt_br2" 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:26.400 21:35:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:26.400 Cannot find device "nvmf_init_br" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:26.400 Cannot find device "nvmf_init_br2" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:26.400 Cannot find device "nvmf_tgt_br" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:26.400 Cannot find device "nvmf_tgt_br2" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:26.400 Cannot find device "nvmf_br" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:26.400 Cannot find device "nvmf_init_if" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:26.400 Cannot find device "nvmf_init_if2" 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:26.400 
21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:26.400 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:26.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:26.659 00:08:26.659 --- 10.0.0.3 ping statistics --- 00:08:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.659 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:26.659 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:26.659 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:08:26.659 00:08:26.659 --- 10.0.0.4 ping statistics --- 00:08:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.659 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:26.659 00:08:26.659 --- 10.0.0.1 ping statistics --- 00:08:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.659 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:26.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:26.659 00:08:26.659 --- 10.0.0.2 ping statistics --- 00:08:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.659 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64140 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64140 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64140 ']' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.659 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.659 [2024-12-10 21:35:27.385562] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
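With connectivity verified by the pings above, the target is started inside the namespace and configured over JSON-RPC. Stripped of the xtrace noise, the bring-up traced over the next several entries is roughly the following sketch (rpc_cmd in the trace is effectively the test wrapper around scripts/rpc.py; long paths are shortened to the repo root):

    # --wait-for-rpc defers subsystem init until framework_start_init is issued
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420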
00:08:26.659 [2024-12-10 21:35:27.385650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.918 [2024-12-10 21:35:27.537158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.918 [2024-12-10 21:35:27.572776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.918 [2024-12-10 21:35:27.572831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.918 [2024-12-10 21:35:27.572843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.918 [2024-12-10 21:35:27.572851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.918 [2024-12-10 21:35:27.572858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.918 [2024-12-10 21:35:27.573561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.918 [2024-12-10 21:35:27.573703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.918 [2024-12-10 21:35:27.573762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.918 [2024-12-10 21:35:27.573764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.918 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.918 [2024-12-10 21:35:27.699493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 [2024-12-10 21:35:27.710154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 Malloc0 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 [2024-12-10 21:35:27.752617] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64168 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64170 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.178 21:35:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.178 { 00:08:27.178 "params": { 00:08:27.178 "name": "Nvme$subsystem", 00:08:27.178 "trtype": "$TEST_TRANSPORT", 00:08:27.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.178 "adrfam": "ipv4", 00:08:27.178 "trsvcid": "$NVMF_PORT", 00:08:27.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.178 "hdgst": ${hdgst:-false}, 00:08:27.178 "ddgst": ${ddgst:-false} 00:08:27.178 }, 00:08:27.178 "method": "bdev_nvme_attach_controller" 00:08:27.178 } 00:08:27.178 EOF 00:08:27.178 )") 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64172 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64174 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.178 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.179 { 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme$subsystem", 00:08:27.179 "trtype": "$TEST_TRANSPORT", 00:08:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "$NVMF_PORT", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.179 "hdgst": ${hdgst:-false}, 00:08:27.179 "ddgst": ${ddgst:-false} 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 } 00:08:27.179 EOF 00:08:27.179 )") 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:27.179 { 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme$subsystem", 00:08:27.179 "trtype": "$TEST_TRANSPORT", 00:08:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "$NVMF_PORT", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.179 "hdgst": ${hdgst:-false}, 00:08:27.179 "ddgst": ${ddgst:-false} 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 } 00:08:27.179 EOF 00:08:27.179 )") 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.179 { 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme$subsystem", 00:08:27.179 "trtype": "$TEST_TRANSPORT", 00:08:27.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "$NVMF_PORT", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.179 "hdgst": ${hdgst:-false}, 00:08:27.179 "ddgst": ${ddgst:-false} 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 } 00:08:27.179 EOF 00:08:27.179 )") 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
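Each gen_nvmf_target_json heredoc above expands to the same controller definition, which is handed to its bdevperf instance over /dev/fd/63. Reassembled for readability from the printf output that follows, one such fragment reads:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }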
00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme1", 00:08:27.179 "trtype": "tcp", 00:08:27.179 "traddr": "10.0.0.3", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "4420", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.179 "hdgst": false, 00:08:27.179 "ddgst": false 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 }' 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme1", 00:08:27.179 "trtype": "tcp", 00:08:27.179 "traddr": "10.0.0.3", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "4420", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.179 "hdgst": false, 00:08:27.179 "ddgst": false 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 }' 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme1", 00:08:27.179 "trtype": "tcp", 00:08:27.179 "traddr": "10.0.0.3", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "4420", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.179 "hdgst": false, 00:08:27.179 "ddgst": false 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 }' 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.179 "params": { 00:08:27.179 "name": "Nvme1", 00:08:27.179 "trtype": "tcp", 00:08:27.179 "traddr": "10.0.0.3", 00:08:27.179 "adrfam": "ipv4", 00:08:27.179 "trsvcid": "4420", 00:08:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.179 "hdgst": false, 00:08:27.179 "ddgst": false 00:08:27.179 }, 00:08:27.179 "method": "bdev_nvme_attach_controller" 00:08:27.179 }' 00:08:27.179 21:35:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64168 00:08:27.179 [2024-12-10 21:35:27.826862] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:08:27.179 [2024-12-10 21:35:27.827055] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:27.179 [2024-12-10 21:35:27.840405] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:08:27.179 [2024-12-10 21:35:27.840542] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:27.179 [2024-12-10 21:35:27.847428] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:08:27.179 [2024-12-10 21:35:27.847546] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:27.179 [2024-12-10 21:35:27.855076] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:08:27.179 [2024-12-10 21:35:27.855939] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:27.438 [2024-12-10 21:35:28.007076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.438 [2024-12-10 21:35:28.032937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.438 [2024-12-10 21:35:28.045904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.438 [2024-12-10 21:35:28.053641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.438 [2024-12-10 21:35:28.084409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:27.438 [2024-12-10 21:35:28.098457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.438 Running I/O for 1 seconds... 00:08:27.438 [2024-12-10 21:35:28.147756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.438 [2024-12-10 21:35:28.150839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.438 [2024-12-10 21:35:28.179052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:27.438 [2024-12-10 21:35:28.190654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:27.438 [2024-12-10 21:35:28.193043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.438 Running I/O for 1 seconds... 00:08:27.438 [2024-12-10 21:35:28.209602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.696 Running I/O for 1 seconds... 00:08:27.696 Running I/O for 1 seconds... 
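The rendered connection blobs above come out of nvmf/common.sh's gen_nvmf_target_json: one bdev_nvme_attach_controller entry is built per subsystem, the entries are joined with IFS=',' and pretty-printed through jq, and the result is presumably what each bdevperf instance consumes at start-up (for example via --json). A minimal bash sketch of that pattern follows; the variable names (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst, ddgst) are the ones visible in the trace, while the helper name and the outer subsystems/bdev wrapper are assumptions, not the exact upstream implementation.

gen_target_json_sketch() {
    # One attach-controller fragment per requested subsystem, mirroring the
    # config+=("$(cat <<-EOF ... EOF)") loop traced above.
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas (IFS=,) and pretty-print with jq, as in
    # the IFS=, / printf / jq steps logged above. The subsystems/bdev layout
    # below is an assumed wrapper so the output forms a loadable bdev config.
    local IFS=,
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
EOF
}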
00:08:28.631 6442.00 IOPS, 25.16 MiB/s 00:08:28.631 Latency(us) 00:08:28.631 [2024-12-10T21:35:29.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.631 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:28.631 Nvme1n1 : 1.03 6396.62 24.99 0.00 0.00 19691.24 5957.82 36223.53 00:08:28.631 [2024-12-10T21:35:29.414Z] =================================================================================================================== 00:08:28.631 [2024-12-10T21:35:29.414Z] Total : 6396.62 24.99 0.00 0.00 19691.24 5957.82 36223.53 00:08:28.631 7502.00 IOPS, 29.30 MiB/s 00:08:28.631 Latency(us) 00:08:28.631 [2024-12-10T21:35:29.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.631 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:28.631 Nvme1n1 : 1.02 7508.12 29.33 0.00 0.00 16906.25 11617.75 32887.16 00:08:28.631 [2024-12-10T21:35:29.414Z] =================================================================================================================== 00:08:28.631 [2024-12-10T21:35:29.414Z] Total : 7508.12 29.33 0.00 0.00 16906.25 11617.75 32887.16 00:08:28.631 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64170 00:08:28.631 142472.00 IOPS, 556.53 MiB/s 00:08:28.631 Latency(us) 00:08:28.631 [2024-12-10T21:35:29.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.631 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:28.631 Nvme1n1 : 1.00 142095.21 555.06 0.00 0.00 895.64 525.03 2606.55 00:08:28.631 [2024-12-10T21:35:29.414Z] =================================================================================================================== 00:08:28.631 [2024-12-10T21:35:29.414Z] Total : 142095.21 555.06 0.00 0.00 895.64 525.03 2606.55 00:08:28.631 6373.00 IOPS, 24.89 MiB/s [2024-12-10T21:35:29.414Z] 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64172 00:08:28.631 00:08:28.631 Latency(us) 00:08:28.631 [2024-12-10T21:35:29.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.631 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:28.631 Nvme1n1 : 1.01 6505.46 25.41 0.00 0.00 19613.78 5332.25 41466.41 00:08:28.631 [2024-12-10T21:35:29.414Z] =================================================================================================================== 00:08:28.631 [2024-12-10T21:35:29.414Z] Total : 6505.46 25.41 0.00 0.00 19613.78 5332.25 41466.41 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64174 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.890 rmmod nvme_tcp 00:08:28.890 rmmod nvme_fabrics 00:08:28.890 rmmod nvme_keyring 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64140 ']' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64140 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64140 ']' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64140 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64140 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.890 killing process with pid 64140 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64140' 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64140 00:08:28.890 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64140 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:29.161 21:35:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.161 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:29.420 ************************************ 00:08:29.420 END TEST nvmf_bdev_io_wait 00:08:29.420 ************************************ 00:08:29.420 00:08:29.420 real 0m3.217s 00:08:29.420 user 0m12.980s 00:08:29.420 sys 0m2.075s 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.420 21:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.420 ************************************ 00:08:29.420 START TEST nvmf_queue_depth 00:08:29.420 ************************************ 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.420 * Looking for test 
storage... 00:08:29.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:29.420 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.421 --rc genhtml_branch_coverage=1 00:08:29.421 --rc genhtml_function_coverage=1 00:08:29.421 --rc genhtml_legend=1 00:08:29.421 --rc geninfo_all_blocks=1 00:08:29.421 --rc geninfo_unexecuted_blocks=1 00:08:29.421 00:08:29.421 ' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.421 --rc genhtml_branch_coverage=1 00:08:29.421 --rc genhtml_function_coverage=1 00:08:29.421 --rc genhtml_legend=1 00:08:29.421 --rc geninfo_all_blocks=1 00:08:29.421 --rc geninfo_unexecuted_blocks=1 00:08:29.421 00:08:29.421 ' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.421 --rc genhtml_branch_coverage=1 00:08:29.421 --rc genhtml_function_coverage=1 00:08:29.421 --rc genhtml_legend=1 00:08:29.421 --rc geninfo_all_blocks=1 00:08:29.421 --rc geninfo_unexecuted_blocks=1 00:08:29.421 00:08:29.421 ' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.421 --rc genhtml_branch_coverage=1 00:08:29.421 --rc genhtml_function_coverage=1 00:08:29.421 --rc genhtml_legend=1 00:08:29.421 --rc geninfo_all_blocks=1 00:08:29.421 --rc geninfo_unexecuted_blocks=1 00:08:29.421 00:08:29.421 ' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.421 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:29.421 
21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.421 21:35:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.421 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.422 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.681 Cannot find device "nvmf_init_br" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.681 Cannot find device "nvmf_init_br2" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.681 Cannot find device "nvmf_tgt_br" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.681 Cannot find device "nvmf_tgt_br2" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.681 Cannot find device "nvmf_init_br" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.681 Cannot find device "nvmf_init_br2" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.681 Cannot find device "nvmf_tgt_br" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.681 Cannot find device "nvmf_tgt_br2" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.681 Cannot find device "nvmf_br" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.681 Cannot find device "nvmf_init_if" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.681 Cannot find device "nvmf_init_if2" 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.681 21:35:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.681 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.940 
21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:29.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:08:29.940 00:08:29.940 --- 10.0.0.3 ping statistics --- 00:08:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.940 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:29.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:29.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:08:29.940 00:08:29.940 --- 10.0.0.4 ping statistics --- 00:08:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.940 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:29.940 00:08:29.940 --- 10.0.0.1 ping statistics --- 00:08:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.940 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:29.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:29.940 00:08:29.940 --- 10.0.0.2 ping statistics --- 00:08:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.940 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.940 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64436 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64436 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64436 ']' 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.941 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.941 [2024-12-10 21:35:30.643800] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:08:29.941 [2024-12-10 21:35:30.643884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.199 [2024-12-10 21:35:30.795334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.199 [2024-12-10 21:35:30.838888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.200 [2024-12-10 21:35:30.838967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.200 [2024-12-10 21:35:30.838986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.200 [2024-12-10 21:35:30.839000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.200 [2024-12-10 21:35:30.839012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.200 [2024-12-10 21:35:30.839418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.200 [2024-12-10 21:35:30.871797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.200 [2024-12-10 21:35:30.961594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.200 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.200 Malloc0 00:08:30.458 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.458 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.458 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.458 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:30.459 21:35:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 [2024-12-10 21:35:31.004461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64455 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64455 /var/tmp/bdevperf.sock 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64455 ']' 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.459 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 [2024-12-10 21:35:31.064730] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:08:30.459 [2024-12-10 21:35:31.064870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64455 ] 00:08:30.459 [2024-12-10 21:35:31.220682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.717 [2024-12-10 21:35:31.261671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.717 [2024-12-10 21:35:31.295349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 NVMe0n1 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.717 21:35:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.976 Running I/O for 10 seconds... 00:08:32.844 6159.00 IOPS, 24.06 MiB/s [2024-12-10T21:35:35.001Z] 6670.00 IOPS, 26.05 MiB/s [2024-12-10T21:35:35.934Z] 6826.67 IOPS, 26.67 MiB/s [2024-12-10T21:35:36.868Z] 6827.25 IOPS, 26.67 MiB/s [2024-12-10T21:35:37.801Z] 6796.00 IOPS, 26.55 MiB/s [2024-12-10T21:35:38.759Z] 6955.83 IOPS, 27.17 MiB/s [2024-12-10T21:35:39.694Z] 7038.29 IOPS, 27.49 MiB/s [2024-12-10T21:35:40.628Z] 7169.00 IOPS, 28.00 MiB/s [2024-12-10T21:35:42.004Z] 7213.22 IOPS, 28.18 MiB/s [2024-12-10T21:35:42.004Z] 7299.40 IOPS, 28.51 MiB/s 00:08:41.221 Latency(us) 00:08:41.221 [2024-12-10T21:35:42.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.221 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:41.221 Verification LBA range: start 0x0 length 0x4000 00:08:41.221 NVMe0n1 : 10.08 7341.59 28.68 0.00 0.00 138820.13 18707.55 111530.36 00:08:41.221 [2024-12-10T21:35:42.004Z] =================================================================================================================== 00:08:41.221 [2024-12-10T21:35:42.004Z] Total : 7341.59 28.68 0.00 0.00 138820.13 18707.55 111530.36 00:08:41.221 { 00:08:41.221 "results": [ 00:08:41.221 { 00:08:41.221 "job": "NVMe0n1", 00:08:41.221 "core_mask": "0x1", 00:08:41.221 "workload": "verify", 00:08:41.221 "status": "finished", 00:08:41.221 "verify_range": { 00:08:41.221 "start": 0, 00:08:41.221 "length": 16384 00:08:41.221 }, 00:08:41.221 "queue_depth": 1024, 00:08:41.221 "io_size": 4096, 00:08:41.221 "runtime": 10.077648, 00:08:41.221 "iops": 7341.593990978848, 00:08:41.221 "mibps": 28.678101527261123, 00:08:41.221 "io_failed": 0, 00:08:41.221 "io_timeout": 0, 00:08:41.221 "avg_latency_us": 138820.13425124655, 00:08:41.221 "min_latency_us": 18707.54909090909, 00:08:41.221 "max_latency_us": 111530.35636363637 
00:08:41.221 } 00:08:41.221 ], 00:08:41.221 "core_count": 1 00:08:41.221 } 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64455 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64455 ']' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64455 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64455 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.221 killing process with pid 64455 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64455' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64455 00:08:41.221 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.221 00:08:41.221 Latency(us) 00:08:41.221 [2024-12-10T21:35:42.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.221 [2024-12-10T21:35:42.004Z] =================================================================================================================== 00:08:41.221 [2024-12-10T21:35:42.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64455 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.221 rmmod nvme_tcp 00:08:41.221 rmmod nvme_fabrics 00:08:41.221 rmmod nvme_keyring 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64436 ']' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64436 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64436 ']' 
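In condensed form, the queue-depth exercise traced above is: start bdevperf idle, attach the exported namespace over NVMe/TCP through bdevperf's private RPC socket, then trigger the timed run and read back the JSON summary. A rough bash sketch of those three steps, reusing the paths, addresses and options from this run; calling scripts/rpc.py directly in place of the harness's rpc_cmd wrapper is an assumption.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Start bdevperf in wait mode (-z) with a 1024-deep, 4 KiB verify workload
#    on its own RPC socket (the harness waits for that socket before step 2).
"$bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# 2. Attach the namespace listening on 10.0.0.3:4420 as controller NVMe0.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the 10-second run and collect the JSON results shown above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests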
00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64436 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64436 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.221 killing process with pid 64436 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64436' 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64436 00:08:41.221 21:35:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64436 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:41.480 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:41.739 21:35:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:41.739 00:08:41.739 real 0m12.355s 00:08:41.739 user 0m21.100s 00:08:41.739 sys 0m2.153s 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 ************************************ 00:08:41.739 END TEST nvmf_queue_depth 00:08:41.739 ************************************ 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.739 ************************************ 00:08:41.739 START TEST nvmf_target_multipath 00:08:41.739 ************************************ 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.739 * Looking for test storage... 
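[Editor's note] Before nvmf_target_multipath gets going, nvmftestfini has torn down the tcp transport environment of the previous test. Summarised from the traced commands above (a sketch of the order, not the verbatim common.sh code):

    # approximate cleanup order reconstructed from the trace
    modprobe -v -r nvme-tcp                                   # also pulls out nvme_fabrics/nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the SPDK_NVMF-tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    _remove_spdk_ns 15> /dev/null                             # common.sh helper that drops the nvmf_tgt_ns_spdk namespace

The multipath test that starts here then re-probes the workspace for test storage and an lcov version before rebuilding the same veth topology, as the trace below shows.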
00:08:41.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.739 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.999 --rc genhtml_branch_coverage=1 00:08:41.999 --rc genhtml_function_coverage=1 00:08:41.999 --rc genhtml_legend=1 00:08:41.999 --rc geninfo_all_blocks=1 00:08:41.999 --rc geninfo_unexecuted_blocks=1 00:08:41.999 00:08:41.999 ' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.999 --rc genhtml_branch_coverage=1 00:08:41.999 --rc genhtml_function_coverage=1 00:08:41.999 --rc genhtml_legend=1 00:08:41.999 --rc geninfo_all_blocks=1 00:08:41.999 --rc geninfo_unexecuted_blocks=1 00:08:41.999 00:08:41.999 ' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.999 --rc genhtml_branch_coverage=1 00:08:41.999 --rc genhtml_function_coverage=1 00:08:41.999 --rc genhtml_legend=1 00:08:41.999 --rc geninfo_all_blocks=1 00:08:41.999 --rc geninfo_unexecuted_blocks=1 00:08:41.999 00:08:41.999 ' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.999 --rc genhtml_branch_coverage=1 00:08:41.999 --rc genhtml_function_coverage=1 00:08:41.999 --rc genhtml_legend=1 00:08:41.999 --rc geninfo_all_blocks=1 00:08:41.999 --rc geninfo_unexecuted_blocks=1 00:08:41.999 00:08:41.999 ' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.999 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.000 
21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.000 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:42.000 21:35:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:42.000 Cannot find device "nvmf_init_br" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:42.000 Cannot find device "nvmf_init_br2" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:42.000 Cannot find device "nvmf_tgt_br" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.000 Cannot find device "nvmf_tgt_br2" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:42.000 Cannot find device "nvmf_init_br" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:42.000 Cannot find device "nvmf_init_br2" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:42.000 Cannot find device "nvmf_tgt_br" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:42.000 Cannot find device "nvmf_tgt_br2" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:42.000 Cannot find device "nvmf_br" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:42.000 Cannot find device "nvmf_init_if" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:42.000 Cannot find device "nvmf_init_if2" 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.000 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
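[Editor's note] The "Cannot find device" and "Cannot open network namespace" messages above are expected on a freshly cleaned host: nvmf_veth_fini runs first and finds nothing to remove. nvmf_veth_init then rebuilds the test topology; condensed from the trace (same interface names and addresses, minor steps omitted):

    # two veth pairs for the initiator side, two for the target side; target ends live in a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # first initiator address
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                  # second initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # first listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2    # second listener address
    # all interfaces are then brought up; the nvmf_br bridge and the SPDK_NVMF iptables
    # rules follow in the trace below, verified by the four ping checks

The two target addresses become the two TCP listeners (port 4420 on 10.0.0.3 and 10.0.0.4) whose ANA states the multipath test flips between inaccessible, non_optimized and optimized while fio runs further down.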
00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:42.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:42.259 00:08:42.259 --- 10.0.0.3 ping statistics --- 00:08:42.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.259 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:42.259 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:42.259 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:42.259 00:08:42.259 --- 10.0.0.4 ping statistics --- 00:08:42.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.259 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:42.259 00:08:42.259 --- 10.0.0.1 ping statistics --- 00:08:42.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.259 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:42.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:42.259 00:08:42.259 --- 10.0.0.2 ping statistics --- 00:08:42.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.259 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.259 21:35:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64819 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64819 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64819 ']' 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.259 21:35:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:42.518 [2024-12-10 21:35:43.063077] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:08:42.518 [2024-12-10 21:35:43.063736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.518 [2024-12-10 21:35:43.216412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.518 [2024-12-10 21:35:43.257224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.518 [2024-12-10 21:35:43.257288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.518 [2024-12-10 21:35:43.257302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.518 [2024-12-10 21:35:43.257312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.518 [2024-12-10 21:35:43.257321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.518 [2024-12-10 21:35:43.258186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.518 [2024-12-10 21:35:43.258242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.518 [2024-12-10 21:35:43.258340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.518 [2024-12-10 21:35:43.258334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.518 [2024-12-10 21:35:43.291640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.500 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.759 [2024-12-10 21:35:44.343607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.759 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:44.018 Malloc0 00:08:44.018 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:44.277 21:35:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.534 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:44.791 [2024-12-10 21:35:45.415484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:44.791 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:45.050 [2024-12-10 21:35:45.683731] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:45.050 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:45.309 21:35:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:47.211 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:47.211 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:47.211 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:47.471 21:35:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64914 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:47.471 21:35:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:47.471 [global] 00:08:47.471 thread=1 00:08:47.471 invalidate=1 00:08:47.471 rw=randrw 00:08:47.471 time_based=1 00:08:47.471 runtime=6 00:08:47.471 ioengine=libaio 00:08:47.471 direct=1 00:08:47.471 bs=4096 00:08:47.471 iodepth=128 00:08:47.471 norandommap=0 00:08:47.471 numjobs=1 00:08:47.471 00:08:47.471 verify_dump=1 00:08:47.471 verify_backlog=512 00:08:47.471 verify_state_save=0 00:08:47.471 do_verify=1 00:08:47.471 verify=crc32c-intel 00:08:47.471 [job0] 00:08:47.471 filename=/dev/nvme0n1 00:08:47.471 Could not set queue depth (nvme0n1) 00:08:47.471 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:47.471 fio-3.35 00:08:47.471 Starting 1 thread 00:08:48.433 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:48.692 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:48.950 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:48.950 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:48.950 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:48.951 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:49.209 21:35:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:49.468 21:35:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64914 00:08:53.657 00:08:53.657 job0: (groupid=0, jobs=1): err= 0: pid=64935: Tue Dec 10 21:35:54 2024 00:08:53.657 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(238MiB/6006msec) 00:08:53.657 slat (usec): min=6, max=5906, avg=57.18, stdev=222.83 00:08:53.657 clat (usec): min=1410, max=16181, avg=8567.46, stdev=1556.74 00:08:53.657 lat (usec): min=1435, max=16196, avg=8624.64, stdev=1561.60 00:08:53.657 clat percentiles (usec): 00:08:53.657 | 1.00th=[ 4490], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 7767], 00:08:53.657 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:08:53.657 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[12256], 00:08:53.657 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14877], 99.95th=[15533], 00:08:53.657 | 99.99th=[16057] 00:08:53.657 bw ( KiB/s): min= 9536, max=27448, per=52.06%, avg=21102.55, stdev=4912.10, samples=11 00:08:53.657 iops : min= 2384, max= 6862, avg=5275.64, stdev=1228.02, samples=11 00:08:53.657 write: IOPS=5889, BW=23.0MiB/s (24.1MB/s)(127MiB/5513msec); 0 zone resets 00:08:53.657 slat (usec): min=15, max=1965, avg=66.59, stdev=153.49 00:08:53.657 clat (usec): min=993, max=15991, avg=7403.55, stdev=1349.35 00:08:53.657 lat (usec): min=1025, max=16016, avg=7470.14, stdev=1354.80 00:08:53.657 clat percentiles (usec): 00:08:53.657 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5604], 20.00th=[ 6849], 00:08:53.657 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:08:53.657 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 00:08:53.657 | 99.00th=[11469], 99.50th=[12256], 99.90th=[13698], 99.95th=[14091], 00:08:53.657 | 99.99th=[15008] 00:08:53.657 bw ( KiB/s): min= 9840, max=26600, per=89.64%, avg=21116.18, stdev=4626.69, samples=11 00:08:53.657 iops : min= 2460, max= 6650, avg=5279.00, stdev=1156.68, samples=11 00:08:53.657 lat (usec) : 1000=0.01% 00:08:53.657 lat (msec) : 2=0.02%, 4=1.29%, 10=91.35%, 20=7.33% 00:08:53.657 cpu : usr=5.63%, sys=24.18%, ctx=5398, majf=0, minf=54 00:08:53.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:53.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:53.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.657 issued rwts: total=60866,32468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.657 00:08:53.657 Run status group 0 (all jobs): 00:08:53.657 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=238MiB (249MB), run=6006-6006msec 00:08:53.657 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=127MiB (133MB), run=5513-5513msec 00:08:53.657 00:08:53.657 Disk stats (read/write): 00:08:53.657 nvme0n1: ios=59986/31862, merge=0/0, ticks=491118/220268, in_queue=711386, util=98.56% 00:08:53.657 21:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:53.916 21:35:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65017 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:54.483 21:35:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:54.483 [global] 00:08:54.483 thread=1 00:08:54.483 invalidate=1 00:08:54.483 rw=randrw 00:08:54.483 time_based=1 00:08:54.483 runtime=6 00:08:54.483 ioengine=libaio 00:08:54.483 direct=1 00:08:54.483 bs=4096 00:08:54.483 iodepth=128 00:08:54.483 norandommap=0 00:08:54.483 numjobs=1 00:08:54.483 00:08:54.483 verify_dump=1 00:08:54.483 verify_backlog=512 00:08:54.483 verify_state_save=0 00:08:54.483 do_verify=1 00:08:54.483 verify=crc32c-intel 00:08:54.483 [job0] 00:08:54.483 filename=/dev/nvme0n1 00:08:54.483 Could not set queue depth (nvme0n1) 00:08:54.483 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.483 fio-3.35 00:08:54.483 Starting 1 thread 00:08:55.417 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:55.675 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:55.934 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:56.192 21:35:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.758 21:35:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65017 00:09:00.963 00:09:00.963 job0: (groupid=0, jobs=1): err= 0: pid=65042: Tue Dec 10 21:36:01 2024 00:09:00.963 read: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(265MiB/6006msec) 00:09:00.963 slat (usec): min=3, max=6945, avg=42.16, stdev=180.55 00:09:00.963 clat (usec): min=315, max=17000, avg=7607.54, stdev=2242.33 00:09:00.963 lat (usec): min=328, max=17017, avg=7649.70, stdev=2253.68 00:09:00.963 clat percentiles (usec): 00:09:00.963 | 1.00th=[ 1156], 5.00th=[ 3261], 10.00th=[ 4555], 20.00th=[ 5866], 00:09:00.963 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:00.963 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[11338], 00:09:00.963 | 99.00th=[13042], 99.50th=[13435], 99.90th=[15926], 99.95th=[16057], 00:09:00.963 | 99.99th=[16581] 00:09:00.963 bw ( KiB/s): min=13480, max=34976, per=55.19%, avg=24948.73, stdev=7495.38, samples=11 00:09:00.963 iops : min= 3370, max= 8744, avg=6237.18, stdev=1873.84, samples=11 00:09:00.963 write: IOPS=7058, BW=27.6MiB/s (28.9MB/s)(147MiB/5332msec); 0 zone resets 00:09:00.963 slat (usec): min=5, max=1534, avg=57.32, stdev=123.54 00:09:00.963 clat (usec): min=742, max=17433, avg=6541.88, stdev=1868.71 00:09:00.963 lat (usec): min=955, max=17465, avg=6599.21, stdev=1881.78 00:09:00.963 clat percentiles (usec): 00:09:00.963 | 1.00th=[ 2376], 5.00th=[ 3294], 10.00th=[ 3785], 20.00th=[ 4555], 00:09:00.963 | 30.00th=[ 5473], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7439], 00:09:00.963 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8356], 95.00th=[ 8848], 00:09:00.963 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12911], 99.95th=[13435], 00:09:00.963 | 99.99th=[14746] 00:09:00.963 bw ( KiB/s): min=14072, max=35816, per=88.34%, avg=24941.55, stdev=7225.59, samples=11 00:09:00.963 iops : min= 3518, max= 8954, avg=6235.36, stdev=1806.39, samples=11 00:09:00.963 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.28% 00:09:00.963 lat (msec) : 2=1.25%, 4=7.54%, 10=84.89%, 20=5.98% 00:09:00.963 cpu : usr=6.23%, sys=28.17%, ctx=6890, majf=0, minf=139 00:09:00.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:00.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.963 issued rwts: total=67872,37634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.963 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:00.963 00:09:00.963 Run status group 0 (all jobs): 00:09:00.963 READ: bw=44.1MiB/s (46.3MB/s), 44.1MiB/s-44.1MiB/s (46.3MB/s-46.3MB/s), io=265MiB (278MB), run=6006-6006msec 00:09:00.963 WRITE: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=147MiB (154MB), run=5332-5332msec 00:09:00.963 00:09:00.963 Disk stats (read/write): 00:09:00.963 nvme0n1: ios=66995/37014, merge=0/0, ticks=477234/217767, in_queue=695001, util=98.65% 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.963 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.963 rmmod nvme_tcp 00:09:00.964 rmmod nvme_fabrics 00:09:00.964 rmmod nvme_keyring 00:09:00.964 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64819 ']' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64819 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64819 ']' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64819 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64819 00:09:01.222 killing process with pid 64819 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64819' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64819 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64819 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:01.222 21:36:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:01.481 21:36:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:01.481 00:09:01.481 real 0m19.774s 00:09:01.481 user 1m14.436s 00:09:01.481 sys 0m9.682s 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.481 ************************************ 00:09:01.481 END TEST nvmf_target_multipath 00:09:01.481 ************************************ 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.481 ************************************ 00:09:01.481 START TEST nvmf_zcopy 00:09:01.481 ************************************ 00:09:01.481 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.740 * Looking for test storage... 
00:09:01.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.740 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:01.740 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:01.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.741 --rc genhtml_branch_coverage=1 00:09:01.741 --rc genhtml_function_coverage=1 00:09:01.741 --rc genhtml_legend=1 00:09:01.741 --rc geninfo_all_blocks=1 00:09:01.741 --rc geninfo_unexecuted_blocks=1 00:09:01.741 00:09:01.741 ' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:01.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.741 --rc genhtml_branch_coverage=1 00:09:01.741 --rc genhtml_function_coverage=1 00:09:01.741 --rc genhtml_legend=1 00:09:01.741 --rc geninfo_all_blocks=1 00:09:01.741 --rc geninfo_unexecuted_blocks=1 00:09:01.741 00:09:01.741 ' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:01.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.741 --rc genhtml_branch_coverage=1 00:09:01.741 --rc genhtml_function_coverage=1 00:09:01.741 --rc genhtml_legend=1 00:09:01.741 --rc geninfo_all_blocks=1 00:09:01.741 --rc geninfo_unexecuted_blocks=1 00:09:01.741 00:09:01.741 ' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:01.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.741 --rc genhtml_branch_coverage=1 00:09:01.741 --rc genhtml_function_coverage=1 00:09:01.741 --rc genhtml_legend=1 00:09:01.741 --rc geninfo_all_blocks=1 00:09:01.741 --rc geninfo_unexecuted_blocks=1 00:09:01.741 00:09:01.741 ' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.741 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:01.741 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:01.742 Cannot find device "nvmf_init_br" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:01.742 21:36:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:01.742 Cannot find device "nvmf_init_br2" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:01.742 Cannot find device "nvmf_tgt_br" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.742 Cannot find device "nvmf_tgt_br2" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:01.742 Cannot find device "nvmf_init_br" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:01.742 Cannot find device "nvmf_init_br2" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:01.742 Cannot find device "nvmf_tgt_br" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:01.742 Cannot find device "nvmf_tgt_br2" 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:01.742 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:02.000 Cannot find device "nvmf_br" 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:02.000 Cannot find device "nvmf_init_if" 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:02.000 Cannot find device "nvmf_init_if2" 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:02.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.000 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:02.260 21:36:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:02.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:02.260 00:09:02.260 --- 10.0.0.3 ping statistics --- 00:09:02.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.260 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:02.260 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:02.260 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:09:02.260 00:09:02.260 --- 10.0.0.4 ping statistics --- 00:09:02.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.260 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:02.260 00:09:02.260 --- 10.0.0.1 ping statistics --- 00:09:02.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.260 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:02.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:02.260 00:09:02.260 --- 10.0.0.2 ping statistics --- 00:09:02.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.260 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65343 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65343 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65343 ']' 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.260 21:36:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.260 [2024-12-10 21:36:02.917872] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:09:02.260 [2024-12-10 21:36:02.917957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.518 [2024-12-10 21:36:03.065425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.518 [2024-12-10 21:36:03.097341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.518 [2024-12-10 21:36:03.097392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.518 [2024-12-10 21:36:03.097404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.518 [2024-12-10 21:36:03.097412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.518 [2024-12-10 21:36:03.097419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.518 [2024-12-10 21:36:03.097722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.518 [2024-12-10 21:36:03.127031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.460 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.460 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:03.460 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.460 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 [2024-12-10 21:36:03.956360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.461 [2024-12-10 21:36:03.972560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 malloc0 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.461 21:36:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.461 { 00:09:03.461 "params": { 00:09:03.461 "name": "Nvme$subsystem", 00:09:03.461 "trtype": "$TEST_TRANSPORT", 00:09:03.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.461 "adrfam": "ipv4", 00:09:03.461 "trsvcid": "$NVMF_PORT", 00:09:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.461 "hdgst": ${hdgst:-false}, 00:09:03.461 "ddgst": ${ddgst:-false} 00:09:03.461 }, 00:09:03.461 "method": "bdev_nvme_attach_controller" 00:09:03.461 } 00:09:03.461 EOF 00:09:03.461 )") 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.461 21:36:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.461 "params": { 00:09:03.461 "name": "Nvme1", 00:09:03.461 "trtype": "tcp", 00:09:03.461 "traddr": "10.0.0.3", 00:09:03.461 "adrfam": "ipv4", 00:09:03.461 "trsvcid": "4420", 00:09:03.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.461 "hdgst": false, 00:09:03.461 "ddgst": false 00:09:03.461 }, 00:09:03.461 "method": "bdev_nvme_attach_controller" 00:09:03.461 }' 00:09:03.461 [2024-12-10 21:36:04.055119] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:09:03.461 [2024-12-10 21:36:04.055197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65376 ] 00:09:03.461 [2024-12-10 21:36:04.203876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.756 [2024-12-10 21:36:04.242836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.756 [2024-12-10 21:36:04.287917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.756 Running I/O for 10 seconds... 00:09:05.688 5700.00 IOPS, 44.53 MiB/s [2024-12-10T21:36:07.844Z] 5766.50 IOPS, 45.05 MiB/s [2024-12-10T21:36:08.408Z] 5732.33 IOPS, 44.78 MiB/s [2024-12-10T21:36:09.781Z] 5738.75 IOPS, 44.83 MiB/s [2024-12-10T21:36:10.716Z] 5763.00 IOPS, 45.02 MiB/s [2024-12-10T21:36:11.652Z] 5674.67 IOPS, 44.33 MiB/s [2024-12-10T21:36:12.594Z] 5668.71 IOPS, 44.29 MiB/s [2024-12-10T21:36:13.552Z] 5686.50 IOPS, 44.43 MiB/s [2024-12-10T21:36:14.488Z] 5699.33 IOPS, 44.53 MiB/s [2024-12-10T21:36:14.488Z] 5712.60 IOPS, 44.63 MiB/s 00:09:13.705 Latency(us) 00:09:13.705 [2024-12-10T21:36:14.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.705 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:13.705 Verification LBA range: start 0x0 length 0x1000 00:09:13.705 Nvme1n1 : 10.02 5714.80 44.65 0.00 0.00 22324.70 1735.21 35746.91 00:09:13.705 [2024-12-10T21:36:14.488Z] =================================================================================================================== 00:09:13.705 [2024-12-10T21:36:14.488Z] Total : 5714.80 44.65 0.00 0.00 22324.70 1735.21 35746.91 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65498 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.964 { 00:09:13.964 "params": { 00:09:13.964 "name": "Nvme$subsystem", 00:09:13.964 "trtype": "$TEST_TRANSPORT", 00:09:13.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.964 "adrfam": "ipv4", 00:09:13.964 "trsvcid": "$NVMF_PORT", 00:09:13.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.964 "hdgst": ${hdgst:-false}, 00:09:13.964 "ddgst": ${ddgst:-false} 00:09:13.964 }, 00:09:13.964 "method": "bdev_nvme_attach_controller" 00:09:13.964 } 00:09:13.964 EOF 00:09:13.964 )") 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:13.964 [2024-12-10 21:36:14.563365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.563410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:13.964 21:36:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.964 "params": { 00:09:13.964 "name": "Nvme1", 00:09:13.964 "trtype": "tcp", 00:09:13.964 "traddr": "10.0.0.3", 00:09:13.964 "adrfam": "ipv4", 00:09:13.964 "trsvcid": "4420", 00:09:13.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.964 "hdgst": false, 00:09:13.964 "ddgst": false 00:09:13.964 }, 00:09:13.964 "method": "bdev_nvme_attach_controller" 00:09:13.964 }' 00:09:13.964 [2024-12-10 21:36:14.579347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.579388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.587331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.587487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.599367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.599412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.611349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.611387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.616098] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:09:13.964 [2024-12-10 21:36:14.616186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65498 ] 00:09:13.964 [2024-12-10 21:36:14.623333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.623363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.635347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.635376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.647365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.647399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.659359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.659392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.964 [2024-12-10 21:36:14.671364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.964 [2024-12-10 21:36:14.671399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.683364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.683398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.695365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.695396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.707370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.707401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.719407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.719469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.731422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.731496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.965 [2024-12-10 21:36:14.743401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.965 [2024-12-10 21:36:14.743435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.755389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.755421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.763405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.763458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.769000] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:09:14.224 [2024-12-10 21:36:14.775439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.775500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.783427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.783481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.795457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.795517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.807438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.807500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.819437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.819488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.824561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.224 [2024-12-10 21:36:14.831433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.831483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.843470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.843516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.855485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.855541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.863171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.224 [2024-12-10 21:36:14.867479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.867525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.879499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.879556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.891487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.891541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.903506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.903545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.911513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.911545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.923529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:14.224 [2024-12-10 21:36:14.923566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.931579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.931615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.943587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.943635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.955582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.955620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 Running I/O for 5 seconds... 00:09:14.224 [2024-12-10 21:36:14.967585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.967618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:14.985088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:14.985263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.224 [2024-12-10 21:36:15.000290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.224 [2024-12-10 21:36:15.000543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.016721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.016760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.026413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.026460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.040777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.040816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.050767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.050803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.065829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.065868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.076234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.076394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.091231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.091383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.107066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.107359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:14.498 [2024-12-10 21:36:15.117117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.117318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.129062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.129212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.140361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.140528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.156224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.156396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.173335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.173512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.183683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.183832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.195420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.195598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.206833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.206985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.222876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.223028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.239870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.240021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.250303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.250464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.261871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.262021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.498 [2024-12-10 21:36:15.272886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.498 [2024-12-10 21:36:15.273038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.288067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.288219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.305308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 
[2024-12-10 21:36:15.305471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.315364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.315521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.327042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.327191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.341820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.341971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.352421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.352604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.367782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.368009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.383170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.383323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.399386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.399556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.409412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.409583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.421069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.421226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.436528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.436573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.454327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.454369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.465024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.465060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.476792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.476830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.487600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.487655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.500400] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.500457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.518032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.518074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.757 [2024-12-10 21:36:15.534085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.757 [2024-12-10 21:36:15.534125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.551823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.551863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.562588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.562641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.577332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.577375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.594761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.594798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.611637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.611673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.627997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.628040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.645517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.645561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.661979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.662014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.679255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.679296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.689665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.689700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.704512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.704550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.719554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.719594] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.735107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.735143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.744968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.745013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.756805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.756839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.767773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.767806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.016 [2024-12-10 21:36:15.784793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.016 [2024-12-10 21:36:15.784827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.802672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.802731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.817324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.817376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.832676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.832708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.842769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.842801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.858835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.858868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.875406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.875439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.893169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.893203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.903209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.903242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.918002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.918035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.935145] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.935202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.951149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.951184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.960512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.960544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 11318.00 IOPS, 88.42 MiB/s [2024-12-10T21:36:16.058Z] [2024-12-10 21:36:15.976616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.976647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:15.987070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:15.987102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:16.001767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:16.001801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:16.011934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:16.011969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:16.023584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:16.023615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:16.038672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:16.038705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.275 [2024-12-10 21:36:16.054092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.275 [2024-12-10 21:36:16.054127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.063305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.063340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.076933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.076982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.087820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.087867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.098689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.098722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.110007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:15.584 [2024-12-10 21:36:16.110056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.126147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.126179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.143334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.143368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.153119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.153152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.168183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.168222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.179131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.179170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.194225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.194262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.211469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.211503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.221583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.221617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.236080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.236119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.252093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.252128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.261677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.261709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.276485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.276518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.287037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.287069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.302458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.302496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.319295] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.319330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.329420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.329467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.344019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.344053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.584 [2024-12-10 21:36:16.354274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.584 [2024-12-10 21:36:16.354305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.369173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.369207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.386027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.386063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.401619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.401655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.412023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.412059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.424077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.424127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.434877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.434911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.445677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.445727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.456786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.456835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.474112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.474146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.491981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.492014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.502492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.502524] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.517210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.517247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.535257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.535295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.545461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.545493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.557242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.557279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.572228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.572268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.589725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.589761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.605855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.605889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.843 [2024-12-10 21:36:16.615250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.843 [2024-12-10 21:36:16.615284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.627555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.627587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.638226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.638259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.652944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.652977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.663614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.663646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.678002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.678035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.688236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.688269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.703079] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.703113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.718984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.719018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.728829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.728861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.744414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.744459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.754861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.754894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.769327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.769359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.779602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.779639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.794137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.794170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.803876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.803908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.815846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.815879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.826571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.826613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.841222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.841256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.858938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.858987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.873743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.873781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.103 [2024-12-10 21:36:16.883291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.103 [2024-12-10 21:36:16.883324] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.894993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.895026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.905795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.905827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.918568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.918600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.929138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.929171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.943472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.943504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.959722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.959757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 11398.00 IOPS, 89.05 MiB/s [2024-12-10T21:36:17.145Z] [2024-12-10 21:36:16.969560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.969593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.984912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.984946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:16.994863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:16.994895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.009770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.009831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.020385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.020438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.035060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.035125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.051725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.051770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.069760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.069832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 
21:36:17.085070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.085129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.095144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.095189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.106734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.106769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.362 [2024-12-10 21:36:17.117520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.362 [2024-12-10 21:36:17.117562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.363 [2024-12-10 21:36:17.132921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.363 [2024-12-10 21:36:17.132959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.148838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.148885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.159179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.159212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.174129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.174179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.191336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.191372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.201417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.201467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.214016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.214051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.229279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.229313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.245173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.245212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.254472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.254506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.266514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.621 [2024-12-10 21:36:17.266545] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.621 [2024-12-10 21:36:17.277436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.277498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.292663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.292695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.302886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.302919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.317677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.317707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.328293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.328339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.339719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.339752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.355659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.355691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.366182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.366213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.377648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.377681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.622 [2024-12-10 21:36:17.393742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.622 [2024-12-10 21:36:17.393775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.410221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.410256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.425479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.425511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.441682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.441714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.451304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.451336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.466221] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.466254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.475877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.475910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.488083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.488131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.499636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.499669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.513255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.513288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.529361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.529396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.547271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.547319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.562130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.562164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.571581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.571615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.583512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.583548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.598652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.598691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.615977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.616012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.632301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.632344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.642062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.642109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.881 [2024-12-10 21:36:17.657232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.881 [2024-12-10 21:36:17.657291] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.673836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.673894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.692798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.692865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.707833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.707886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.717950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.717996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.733195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.733253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.750091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.750151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.766930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.766988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.781814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.781866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.797502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.797558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.815093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.815132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.831756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.831805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.841527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.841561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.852987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.853023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.864170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.864210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.875371] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.140 [2024-12-10 21:36:17.875405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.140 [2024-12-10 21:36:17.886033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.141 [2024-12-10 21:36:17.886081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.141 [2024-12-10 21:36:17.901218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.141 [2024-12-10 21:36:17.901251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.141 [2024-12-10 21:36:17.917811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.141 [2024-12-10 21:36:17.917845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:17.934276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.934313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:17.950820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.950869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:17.967105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.967141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 11395.00 IOPS, 89.02 MiB/s [2024-12-10T21:36:18.183Z] [2024-12-10 21:36:17.976806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.976838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:17.988472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.988505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:17.999269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:17.999302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.011494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.011527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.020789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.020822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.033864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.033897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.044486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.044518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.055154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:17.400 [2024-12-10 21:36:18.055191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.065931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.065964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.083120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.083155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.100651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.100688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.116206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.116240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.132398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.132466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.142292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.142334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.156999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.157032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.400 [2024-12-10 21:36:18.167198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.400 [2024-12-10 21:36:18.167231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.182053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.182088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.199693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.199730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.210604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.210652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.225106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.225141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.235654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.235689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.250564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.250598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.267881] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.267917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.278295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.278331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.293321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.293357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.303847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.303880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.318986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.319022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.335956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.335991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.345243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.345275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.361036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.361072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.371150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.371185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.659 [2024-12-10 21:36:18.385362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.659 [2024-12-10 21:36:18.385396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.660 [2024-12-10 21:36:18.395392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.660 [2024-12-10 21:36:18.395426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.660 [2024-12-10 21:36:18.411189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.660 [2024-12-10 21:36:18.411223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.660 [2024-12-10 21:36:18.421647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.660 [2024-12-10 21:36:18.421698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.660 [2024-12-10 21:36:18.436257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.660 [2024-12-10 21:36:18.436291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.448476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.448514] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.463999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.464049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.473664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.473710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.488352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.488389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.499067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.499101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.513682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.513715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.523960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.523995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.538724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.538764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.556198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.556236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.566831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.566863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.581639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.581673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.597741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.597775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.615265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.615302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.631187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.631223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.649348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.649383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.660059] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.660093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.672905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.672941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.919 [2024-12-10 21:36:18.689408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.919 [2024-12-10 21:36:18.689458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.178 [2024-12-10 21:36:18.705854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.178 [2024-12-10 21:36:18.705894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.178 [2024-12-10 21:36:18.715817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.178 [2024-12-10 21:36:18.715855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.178 [2024-12-10 21:36:18.730550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.178 [2024-12-10 21:36:18.730588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.178 [2024-12-10 21:36:18.746421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.178 [2024-12-10 21:36:18.746468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.764704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.764744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.779579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.779623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.789538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.789573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.806317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.806352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.822901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.822934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.839809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.839846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.856976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.857012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.873145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.873180] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.891470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.891503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.906207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.906241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.915546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.915579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.931473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.931503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.946791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.946824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.179 [2024-12-10 21:36:18.956342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.179 [2024-12-10 21:36:18.956376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 11431.00 IOPS, 89.30 MiB/s [2024-12-10T21:36:19.221Z] [2024-12-10 21:36:18.971841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:18.971876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:18.981415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:18.981461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:18.994722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:18.994754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.009352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.009384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.019062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.019094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.031082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.031114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.046219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.046254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.055828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.055860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 
21:36:19.071588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.071620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.081851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.081884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.096855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.096888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.106774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.106805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.121522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.121556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.132291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.132324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.143229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.143262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.160406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.160457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.177988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.178026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.188312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.188347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.203162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.438 [2024-12-10 21:36:19.203201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.438 [2024-12-10 21:36:19.219161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.439 [2024-12-10 21:36:19.219199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.697 [2024-12-10 21:36:19.237981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.697 [2024-12-10 21:36:19.238025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.252989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.253027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.270631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.270672] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.286087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.286131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.295631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.295665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.309063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.309100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.323665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.323699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.339771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.339805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.356864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.356898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.367102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.367136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.381761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.381794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.391885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.391916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.406749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.406786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.417510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.417540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.432248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.432286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.449081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.449118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.698 [2024-12-10 21:36:19.465735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.698 [2024-12-10 21:36:19.465769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.480050] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.480094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.497746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.497788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.508507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.508547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.519576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.519612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.530541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.530578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.547645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.547683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.564271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.564345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.581012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.581079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.597857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.597920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.615244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.615304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.631537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.631592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.647963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.648027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.658423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.658494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.673107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.673167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.689970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.690029] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.705424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.705487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.715367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.715425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.957 [2024-12-10 21:36:19.731234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.957 [2024-12-10 21:36:19.731276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.216 [2024-12-10 21:36:19.740634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.216 [2024-12-10 21:36:19.740672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.216 [2024-12-10 21:36:19.756314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.216 [2024-12-10 21:36:19.756361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.216 [2024-12-10 21:36:19.766171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.216 [2024-12-10 21:36:19.766203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.781582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.781622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.797400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.797434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.806924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.806957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.820202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.820235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.835420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.835470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.852904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.852939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.863875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.863910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.878828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.217 [2024-12-10 21:36:19.878873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.217 [2024-12-10 21:36:19.895233] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.895272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.911889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.911926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.929197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.929235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.939686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.939720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.950662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.950694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.962981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.963015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 11429.40 IOPS, 89.29 MiB/s [2024-12-10T21:36:20.000Z] [2024-12-10 21:36:19.974094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.974129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217
00:09:19.217 Latency(us)
00:09:19.217 [2024-12-10T21:36:20.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:19.217 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:19.217 Nvme1n1 : 5.01 11437.20 89.35 0.00 0.00 11181.42 4796.04 19184.17
00:09:19.217 [2024-12-10T21:36:20.000Z] ===================================================================================================================
00:09:19.217 [2024-12-10T21:36:20.000Z] Total : 11437.20 89.35 0.00 0.00 11181.42 4796.04 19184.17
00:09:19.217 [2024-12-10 21:36:19.982097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.982130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.217 [2024-12-10 21:36:19.994124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.217 [2024-12-10 21:36:19.994169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.476 [2024-12-10 21:36:20.006129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.476 [2024-12-10 21:36:20.006178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.476 [2024-12-10 21:36:20.018155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.476 [2024-12-10 21:36:20.018204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.476 [2024-12-10 21:36:20.030133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:19.476 [2024-12-10 21:36:20.030180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:19.476 [2024-12-10
21:36:20.042127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.042171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.054141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.054189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.066131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.066169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.078146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.078194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.090132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.090167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.102128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.102161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 [2024-12-10 21:36:20.114144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.476 [2024-12-10 21:36:20.114179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.476 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65498) - No such process 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65498 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.476 delay0 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.476 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.477 21:36:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:19.735 [2024-12-10 21:36:20.320658] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:26.314 Initializing NVMe Controllers 00:09:26.314 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:26.314 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:26.314 Initialization complete. Launching workers. 00:09:26.314 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 663 00:09:26.314 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 950, failed to submit 33 00:09:26.314 success 819, unsuccessful 131, failed 0 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.314 rmmod nvme_tcp 00:09:26.314 rmmod nvme_fabrics 00:09:26.314 rmmod nvme_keyring 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65343 ']' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65343 ']' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65343' 00:09:26.314 killing process with pid 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65343 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.314 21:36:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.314 21:36:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:26.314 00:09:26.314 real 0m24.771s 00:09:26.314 user 0m40.444s 00:09:26.314 sys 0m6.612s 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.314 ************************************ 00:09:26.314 END TEST nvmf_zcopy 00:09:26.314 ************************************ 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.314 ************************************ 00:09:26.314 START TEST nvmf_nmic 00:09:26.314 ************************************ 00:09:26.314 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.574 * Looking for test storage... 00:09:26.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.574 --rc genhtml_branch_coverage=1 00:09:26.574 --rc genhtml_function_coverage=1 00:09:26.574 --rc genhtml_legend=1 00:09:26.574 --rc geninfo_all_blocks=1 00:09:26.574 --rc geninfo_unexecuted_blocks=1 00:09:26.574 00:09:26.574 ' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.574 --rc genhtml_branch_coverage=1 00:09:26.574 --rc genhtml_function_coverage=1 00:09:26.574 --rc genhtml_legend=1 00:09:26.574 --rc geninfo_all_blocks=1 00:09:26.574 --rc geninfo_unexecuted_blocks=1 00:09:26.574 00:09:26.574 ' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.574 --rc genhtml_branch_coverage=1 00:09:26.574 --rc genhtml_function_coverage=1 00:09:26.574 --rc genhtml_legend=1 00:09:26.574 --rc geninfo_all_blocks=1 00:09:26.574 --rc geninfo_unexecuted_blocks=1 00:09:26.574 00:09:26.574 ' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.574 --rc genhtml_branch_coverage=1 00:09:26.574 --rc genhtml_function_coverage=1 00:09:26.574 --rc genhtml_legend=1 00:09:26.574 --rc geninfo_all_blocks=1 00:09:26.574 --rc geninfo_unexecuted_blocks=1 00:09:26.574 00:09:26.574 ' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.574 21:36:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.574 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.575 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.575 21:36:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:26.575 Cannot 
find device "nvmf_init_br" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:26.575 Cannot find device "nvmf_init_br2" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:26.575 Cannot find device "nvmf_tgt_br" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.575 Cannot find device "nvmf_tgt_br2" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:26.575 Cannot find device "nvmf_init_br" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:26.575 Cannot find device "nvmf_init_br2" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:26.575 Cannot find device "nvmf_tgt_br" 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:26.575 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:26.835 Cannot find device "nvmf_tgt_br2" 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:26.835 Cannot find device "nvmf_br" 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:26.835 Cannot find device "nvmf_init_if" 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:26.835 Cannot find device "nvmf_init_if2" 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:26.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:26.835 00:09:26.835 --- 10.0.0.3 ping statistics --- 00:09:26.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.835 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:26.835 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:26.835 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:09:26.835 00:09:26.835 --- 10.0.0.4 ping statistics --- 00:09:26.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.835 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:26.835 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:27.094 00:09:27.094 --- 10.0.0.1 ping statistics --- 00:09:27.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.094 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:27.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:27.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:27.094 00:09:27.094 --- 10.0.0.2 ping statistics --- 00:09:27.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.094 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65885 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65885 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65885 ']' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.094 21:36:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 [2024-12-10 21:36:27.715674] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:09:27.094 [2024-12-10 21:36:27.715792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.094 [2024-12-10 21:36:27.871853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.353 [2024-12-10 21:36:27.912773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.353 [2024-12-10 21:36:27.912833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.353 [2024-12-10 21:36:27.912847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.353 [2024-12-10 21:36:27.912857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.353 [2024-12-10 21:36:27.912865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.353 [2024-12-10 21:36:27.913780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.353 [2024-12-10 21:36:27.913924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.353 [2024-12-10 21:36:27.914035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.353 [2024-12-10 21:36:27.914034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.353 [2024-12-10 21:36:27.946353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 [2024-12-10 21:36:28.061688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 Malloc0 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.353 21:36:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 [2024-12-10 21:36:28.121364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:27.353 test case1: single bdev can't be used in multiple subsystems 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.612 [2024-12-10 21:36:28.145208] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:27.612 [2024-12-10 21:36:28.145278] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:27.612 [2024-12-10 21:36:28.145313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.612 request: 00:09:27.612 { 00:09:27.612 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:27.612 "namespace": { 00:09:27.612 "bdev_name": "Malloc0", 00:09:27.612 "no_auto_visible": false, 00:09:27.612 "hide_metadata": false 00:09:27.612 }, 00:09:27.612 "method": "nvmf_subsystem_add_ns", 00:09:27.612 "req_id": 1 00:09:27.612 } 00:09:27.612 Got JSON-RPC error response 00:09:27.612 response: 00:09:27.612 { 00:09:27.612 "code": -32602, 00:09:27.612 "message": "Invalid parameters" 00:09:27.612 } 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:27.612 Adding namespace failed - expected result. 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:27.612 test case2: host connect to nvmf target in multiple paths 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.612 [2024-12-10 21:36:28.157334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:27.612 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:27.871 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.871 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:27.871 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.871 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:27.871 21:36:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:29.808 21:36:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.808 [global] 00:09:29.808 thread=1 00:09:29.808 invalidate=1 00:09:29.808 rw=write 00:09:29.808 time_based=1 00:09:29.808 runtime=1 00:09:29.808 ioengine=libaio 00:09:29.808 direct=1 00:09:29.808 bs=4096 00:09:29.808 iodepth=1 00:09:29.808 norandommap=0 00:09:29.808 numjobs=1 00:09:29.808 00:09:29.808 verify_dump=1 00:09:29.808 verify_backlog=512 00:09:29.808 verify_state_save=0 00:09:29.808 do_verify=1 00:09:29.808 verify=crc32c-intel 00:09:29.808 [job0] 00:09:29.808 filename=/dev/nvme0n1 00:09:29.808 Could not set queue depth (nvme0n1) 00:09:30.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.067 fio-3.35 00:09:30.067 Starting 1 thread 00:09:31.003 00:09:31.003 job0: (groupid=0, jobs=1): err= 0: pid=65958: Tue Dec 10 21:36:31 2024 00:09:31.003 read: IOPS=2873, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec) 00:09:31.003 slat (usec): min=13, max=115, avg=16.48, stdev= 5.06 00:09:31.003 clat (usec): min=141, max=5959, avg=181.63, stdev=148.95 00:09:31.003 lat (usec): min=157, max=5991, avg=198.11, stdev=149.62 00:09:31.003 clat percentiles (usec): 00:09:31.003 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:31.003 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:09:31.003 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:09:31.003 | 99.00th=[ 243], 99.50th=[ 285], 99.90th=[ 3654], 99.95th=[ 3916], 00:09:31.003 | 99.99th=[ 5932] 00:09:31.003 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:31.003 slat (usec): min=15, max=125, avg=24.05, stdev= 6.92 00:09:31.003 clat (usec): min=85, max=7297, avg=112.36, stdev=131.86 00:09:31.003 lat (usec): min=104, max=7331, avg=136.41, stdev=132.52 00:09:31.003 clat percentiles (usec): 00:09:31.003 | 1.00th=[ 90], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:09:31.003 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:09:31.003 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 137], 00:09:31.003 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 578], 99.95th=[ 766], 00:09:31.003 | 99.99th=[ 7308] 00:09:31.003 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:09:31.003 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:31.003 lat (usec) : 100=13.43%, 250=86.01%, 500=0.40%, 750=0.02%, 1000=0.03% 00:09:31.003 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03% 00:09:31.003 cpu : usr=1.70%, sys=10.50%, ctx=5949, majf=0, minf=5 00:09:31.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.003 issued rwts: total=2876,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.003 00:09:31.003 Run status group 0 (all jobs): 00:09:31.003 READ: bw=11.2MiB/s (11.8MB/s), 11.2MiB/s-11.2MiB/s (11.8MB/s-11.8MB/s), io=11.2MiB (11.8MB), run=1001-1001msec 00:09:31.003 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:31.003 00:09:31.003 Disk 
stats (read/write): 00:09:31.003 nvme0n1: ios=2610/2778, merge=0/0, ticks=473/342, in_queue=815, util=90.78% 00:09:31.003 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:31.261 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.262 rmmod nvme_tcp 00:09:31.262 rmmod nvme_fabrics 00:09:31.262 rmmod nvme_keyring 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65885 ']' 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65885 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65885 ']' 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65885 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65885 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.262 killing process with pid 65885 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65885' 00:09:31.262 
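The [global]/[job0] stanza printed above is what scripts/fio-wrapper generates for "-p nvmf -i 4096 -d 1 -t write -r 1 -v". A roughly equivalent standalone invocation is sketched below; it is hypothetical and assumes the connect step exposed the namespace as /dev/nvme0n1, as the job file above shows:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 --norandommap=0 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0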
21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65885 00:09:31.262 21:36:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65885 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:31.521 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:31.780 00:09:31.780 real 0m5.338s 00:09:31.780 user 0m15.635s 00:09:31.780 sys 0m2.297s 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.780 ************************************ 00:09:31.780 END TEST nvmf_nmic 00:09:31.780 ************************************ 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.780 ************************************ 00:09:31.780 START TEST nvmf_fio_target 00:09:31.780 ************************************ 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:31.780 * Looking for test storage... 00:09:31.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.780 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.039 --rc genhtml_branch_coverage=1 00:09:32.039 --rc genhtml_function_coverage=1 00:09:32.039 --rc genhtml_legend=1 00:09:32.039 --rc geninfo_all_blocks=1 00:09:32.039 --rc geninfo_unexecuted_blocks=1 00:09:32.039 00:09:32.039 ' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.039 --rc genhtml_branch_coverage=1 00:09:32.039 --rc genhtml_function_coverage=1 00:09:32.039 --rc genhtml_legend=1 00:09:32.039 --rc geninfo_all_blocks=1 00:09:32.039 --rc geninfo_unexecuted_blocks=1 00:09:32.039 00:09:32.039 ' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.039 --rc genhtml_branch_coverage=1 00:09:32.039 --rc genhtml_function_coverage=1 00:09:32.039 --rc genhtml_legend=1 00:09:32.039 --rc geninfo_all_blocks=1 00:09:32.039 --rc geninfo_unexecuted_blocks=1 00:09:32.039 00:09:32.039 ' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.039 --rc genhtml_branch_coverage=1 00:09:32.039 --rc genhtml_function_coverage=1 00:09:32.039 --rc genhtml_legend=1 00:09:32.039 --rc geninfo_all_blocks=1 00:09:32.039 --rc geninfo_unexecuted_blocks=1 00:09:32.039 00:09:32.039 ' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:32.039 
21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.039 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.040 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.040 21:36:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:32.040 Cannot find device "nvmf_init_br" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:32.040 Cannot find device "nvmf_init_br2" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:32.040 Cannot find device "nvmf_tgt_br" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.040 Cannot find device "nvmf_tgt_br2" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:32.040 Cannot find device "nvmf_init_br" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:32.040 Cannot find device "nvmf_init_br2" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:32.040 Cannot find device "nvmf_tgt_br" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:32.040 Cannot find device "nvmf_tgt_br2" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.040 Cannot find device "nvmf_br" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.040 Cannot find device "nvmf_init_if" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.040 Cannot find device "nvmf_init_if2" 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:32.040 
21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.040 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.299 21:36:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:32.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:32.299 00:09:32.299 --- 10.0.0.3 ping statistics --- 00:09:32.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.299 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:32.299 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:32.299 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:32.299 00:09:32.299 --- 10.0.0.4 ping statistics --- 00:09:32.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.299 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:32.299 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:32.299 00:09:32.299 --- 10.0.0.1 ping statistics --- 00:09:32.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.300 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:32.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:32.300 00:09:32.300 --- 10.0.0.2 ping statistics --- 00:09:32.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.300 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.300 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66196 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66196 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66196 ']' 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.558 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.558 [2024-12-10 21:36:33.154100] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
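[editor note] At this point nvmf_veth_init has finished: four veth pairs are created, nvmf_init_if/nvmf_init_if2 stay on the host with 10.0.0.1/.2, nvmf_tgt_if/nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/.4, the four peer (*_br) ends are enslaved to the nvmf_br bridge, iptables admits TCP port 4420, and the four pings confirm connectivity in both directions. Collapsed into a standalone sketch with the names and addresses taken from the trace (a simplified illustration, not the script itself):

    # namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry addresses, *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side interfaces live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and tie the four bridge-side ends together
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br  up;  ip link set nvmf_tgt_br2  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br

    # admit NVMe/TCP traffic (port 4420) and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity checks: host -> target namespace and back
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The target itself is then launched inside the namespace (the `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF` invocation above), so it listens on the 10.0.0.3/10.0.0.4 side while fio connects from the host side of the bridge.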
00:09:32.558 [2024-12-10 21:36:33.154187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.558 [2024-12-10 21:36:33.301162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.558 [2024-12-10 21:36:33.334233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.558 [2024-12-10 21:36:33.334297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.558 [2024-12-10 21:36:33.334308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.558 [2024-12-10 21:36:33.334316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.558 [2024-12-10 21:36:33.334323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.558 [2024-12-10 21:36:33.335073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.558 [2024-12-10 21:36:33.335247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.558 [2024-12-10 21:36:33.335134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.558 [2024-12-10 21:36:33.335241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.817 [2024-12-10 21:36:33.365128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.817 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.075 [2024-12-10 21:36:33.778262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.075 21:36:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.642 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:33.642 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.900 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:33.900 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.158 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:34.158 21:36:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.416 21:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:34.417 21:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:34.675 21:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.241 21:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:35.241 21:36:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.241 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:35.241 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.500 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.500 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:36.066 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.324 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.324 21:36:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:36.582 21:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.582 21:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.841 21:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:37.099 [2024-12-10 21:36:37.775365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:37.099 21:36:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:37.665 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:37.665 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:37.922 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:37.923 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:37.923 21:36:38 
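[editor note] The RPC sequence above provisions the target and the `nvme connect` attaches the host: two plain malloc bdevs, a two-member RAID-0 (raid0) and a three-member concat bdev (concat0), all exposed as namespaces of one subsystem listening on 10.0.0.3:4420 — which is why waitforserial below expects four devices (/dev/nvme0n1 through /dev/nvme0n4). The same sequence as a condensed sketch, with the NQN, serial and host UUID copied from the trace (an illustration, not the test script verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # transport and backing bdevs (64 MB, 512-byte blocks each)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512          # -> Malloc0
    $rpc bdev_malloc_create 64 512          # -> Malloc1
    $rpc bdev_malloc_create 64 512          # -> Malloc2
    $rpc bdev_malloc_create 64 512          # -> Malloc3
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512          # -> Malloc4
    $rpc bdev_malloc_create 64 512          # -> Malloc5
    $rpc bdev_malloc_create 64 512          # -> Malloc6
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # one subsystem, four namespaces, one TCP listener
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

    # host side: connect and wait for /dev/nvme0n1..n4 to appear
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c \
        --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c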
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.923 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:37.923 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:37.923 21:36:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:39.821 21:36:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.821 [global] 00:09:39.821 thread=1 00:09:39.821 invalidate=1 00:09:39.821 rw=write 00:09:39.821 time_based=1 00:09:39.821 runtime=1 00:09:39.821 ioengine=libaio 00:09:39.821 direct=1 00:09:39.821 bs=4096 00:09:39.821 iodepth=1 00:09:39.821 norandommap=0 00:09:39.821 numjobs=1 00:09:39.821 00:09:39.821 verify_dump=1 00:09:39.821 verify_backlog=512 00:09:39.821 verify_state_save=0 00:09:39.821 do_verify=1 00:09:39.821 verify=crc32c-intel 00:09:39.821 [job0] 00:09:39.821 filename=/dev/nvme0n1 00:09:39.821 [job1] 00:09:39.821 filename=/dev/nvme0n2 00:09:39.821 [job2] 00:09:39.821 filename=/dev/nvme0n3 00:09:39.821 [job3] 00:09:39.821 filename=/dev/nvme0n4 00:09:40.078 Could not set queue depth (nvme0n1) 00:09:40.078 Could not set queue depth (nvme0n2) 00:09:40.078 Could not set queue depth (nvme0n3) 00:09:40.078 Could not set queue depth (nvme0n4) 00:09:40.078 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.078 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.078 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.078 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.078 fio-3.35 00:09:40.078 Starting 4 threads 00:09:41.416 00:09:41.416 job0: (groupid=0, jobs=1): err= 0: pid=66378: Tue Dec 10 21:36:41 2024 00:09:41.416 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:41.416 slat (usec): min=12, max=111, avg=19.44, stdev= 6.27 00:09:41.416 clat (usec): min=138, max=351, avg=180.56, stdev=25.00 00:09:41.416 lat (usec): min=153, max=377, avg=199.99, stdev=26.00 00:09:41.416 clat percentiles (usec): 00:09:41.416 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:41.416 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:09:41.416 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 221], 95.00th=[ 235], 00:09:41.416 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 302], 00:09:41.416 | 99.99th=[ 351] 00:09:41.416 
write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:09:41.416 slat (usec): min=15, max=134, avg=28.87, stdev= 9.42 00:09:41.416 clat (usec): min=92, max=2016, avg=132.42, stdev=40.81 00:09:41.416 lat (usec): min=112, max=2036, avg=161.29, stdev=41.63 00:09:41.416 clat percentiles (usec): 00:09:41.416 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:09:41.416 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 133], 00:09:41.416 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 167], 00:09:41.416 | 99.00th=[ 190], 99.50th=[ 202], 99.90th=[ 586], 99.95th=[ 594], 00:09:41.416 | 99.99th=[ 2024] 00:09:41.416 bw ( KiB/s): min=12288, max=12288, per=28.32%, avg=12288.00, stdev= 0.00, samples=1 00:09:41.416 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:41.416 lat (usec) : 100=0.15%, 250=98.87%, 500=0.93%, 750=0.04% 00:09:41.416 lat (msec) : 4=0.02% 00:09:41.416 cpu : usr=2.90%, sys=10.80%, ctx=5513, majf=0, minf=15 00:09:41.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.416 issued rwts: total=2560,2948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.416 job1: (groupid=0, jobs=1): err= 0: pid=66379: Tue Dec 10 21:36:41 2024 00:09:41.417 read: IOPS=2548, BW=9.95MiB/s (10.4MB/s)(9.96MiB/1001msec) 00:09:41.417 slat (nsec): min=9433, max=74183, avg=16673.24, stdev=4888.14 00:09:41.417 clat (usec): min=144, max=665, avg=197.83, stdev=41.94 00:09:41.417 lat (usec): min=160, max=686, avg=214.50, stdev=40.96 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:41.417 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 192], 00:09:41.417 | 70.00th=[ 215], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 269], 00:09:41.417 | 99.00th=[ 293], 99.50th=[ 334], 99.90th=[ 523], 99.95th=[ 529], 00:09:41.417 | 99.99th=[ 668] 00:09:41.417 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:41.417 slat (usec): min=12, max=285, avg=26.09, stdev= 9.14 00:09:41.417 clat (usec): min=101, max=4474, avg=146.57, stdev=142.39 00:09:41.417 lat (usec): min=123, max=4510, avg=172.66, stdev=145.32 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 120], 00:09:41.417 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 137], 00:09:41.417 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 208], 00:09:41.417 | 99.00th=[ 237], 99.50th=[ 255], 99.90th=[ 3392], 99.95th=[ 4015], 00:09:41.417 | 99.99th=[ 4490] 00:09:41.417 bw ( KiB/s): min=12263, max=12263, per=28.26%, avg=12263.00, stdev= 0.00, samples=1 00:09:41.417 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:41.417 lat (usec) : 250=91.84%, 500=7.98%, 750=0.08% 00:09:41.417 lat (msec) : 2=0.02%, 4=0.04%, 10=0.04% 00:09:41.417 cpu : usr=2.80%, sys=8.30%, ctx=5115, majf=0, minf=11 00:09:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 issued rwts: total=2551,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.417 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:09:41.417 job2: (groupid=0, jobs=1): err= 0: pid=66380: Tue Dec 10 21:36:41 2024 00:09:41.417 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:41.417 slat (nsec): min=12680, max=57675, avg=17497.09, stdev=5207.79 00:09:41.417 clat (usec): min=152, max=2257, avg=189.70, stdev=49.40 00:09:41.417 lat (usec): min=166, max=2274, avg=207.19, stdev=49.88 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:09:41.417 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:41.417 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 233], 00:09:41.417 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 635], 99.95th=[ 693], 00:09:41.417 | 99.99th=[ 2245] 00:09:41.417 write: IOPS=2772, BW=10.8MiB/s (11.4MB/s)(10.8MiB/1001msec); 0 zone resets 00:09:41.417 slat (nsec): min=16651, max=81664, avg=24422.14, stdev=6269.94 00:09:41.417 clat (usec): min=97, max=524, avg=141.04, stdev=19.82 00:09:41.417 lat (usec): min=131, max=545, avg=165.46, stdev=21.13 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:09:41.417 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:09:41.417 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:09:41.417 | 99.00th=[ 190], 99.50th=[ 258], 99.90th=[ 379], 99.95th=[ 396], 00:09:41.417 | 99.99th=[ 529] 00:09:41.417 bw ( KiB/s): min=12288, max=12288, per=28.32%, avg=12288.00, stdev= 0.00, samples=1 00:09:41.417 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:41.417 lat (usec) : 100=0.02%, 250=98.71%, 500=1.16%, 750=0.09% 00:09:41.417 lat (msec) : 4=0.02% 00:09:41.417 cpu : usr=2.90%, sys=8.50%, ctx=5336, majf=0, minf=9 00:09:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 issued rwts: total=2560,2775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.417 job3: (groupid=0, jobs=1): err= 0: pid=66381: Tue Dec 10 21:36:41 2024 00:09:41.417 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:41.417 slat (nsec): min=9613, max=49871, avg=16382.78, stdev=4994.33 00:09:41.417 clat (usec): min=152, max=1618, avg=200.32, stdev=45.83 00:09:41.417 lat (usec): min=166, max=1633, avg=216.70, stdev=45.67 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:41.417 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 194], 00:09:41.417 | 70.00th=[ 204], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 269], 00:09:41.417 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 537], 99.95th=[ 562], 00:09:41.417 | 99.99th=[ 1614] 00:09:41.417 write: IOPS=2574, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec); 0 zone resets 00:09:41.417 slat (nsec): min=15381, max=92626, avg=22822.26, stdev=5593.29 00:09:41.417 clat (usec): min=109, max=417, avg=146.11, stdev=22.65 00:09:41.417 lat (usec): min=131, max=445, avg=168.93, stdev=23.58 00:09:41.417 clat percentiles (usec): 00:09:41.417 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:09:41.417 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:09:41.417 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 200], 
00:09:41.417 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 245], 99.95th=[ 273], 00:09:41.417 | 99.99th=[ 416] 00:09:41.417 bw ( KiB/s): min=12263, max=12263, per=28.26%, avg=12263.00, stdev= 0.00, samples=1 00:09:41.417 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:41.417 lat (usec) : 250=93.09%, 500=6.85%, 750=0.04% 00:09:41.417 lat (msec) : 2=0.02% 00:09:41.417 cpu : usr=2.20%, sys=8.10%, ctx=5137, majf=0, minf=9 00:09:41.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.417 issued rwts: total=2560,2577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.417 00:09:41.417 Run status group 0 (all jobs): 00:09:41.417 READ: bw=39.9MiB/s (41.9MB/s), 9.95MiB/s-9.99MiB/s (10.4MB/s-10.5MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:09:41.417 WRITE: bw=42.4MiB/s (44.4MB/s), 9.99MiB/s-11.5MiB/s (10.5MB/s-12.1MB/s), io=42.4MiB (44.5MB), run=1001-1001msec 00:09:41.417 00:09:41.417 Disk stats (read/write): 00:09:41.417 nvme0n1: ios=2208/2560, merge=0/0, ticks=428/369, in_queue=797, util=88.28% 00:09:41.417 nvme0n2: ios=2097/2559, merge=0/0, ticks=438/364, in_queue=802, util=88.96% 00:09:41.417 nvme0n3: ios=2053/2560, merge=0/0, ticks=392/382, in_queue=774, util=89.26% 00:09:41.417 nvme0n4: ios=2065/2560, merge=0/0, ticks=393/399, in_queue=792, util=89.81% 00:09:41.417 21:36:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:41.417 [global] 00:09:41.417 thread=1 00:09:41.417 invalidate=1 00:09:41.417 rw=randwrite 00:09:41.417 time_based=1 00:09:41.417 runtime=1 00:09:41.417 ioengine=libaio 00:09:41.417 direct=1 00:09:41.417 bs=4096 00:09:41.417 iodepth=1 00:09:41.417 norandommap=0 00:09:41.417 numjobs=1 00:09:41.417 00:09:41.417 verify_dump=1 00:09:41.417 verify_backlog=512 00:09:41.417 verify_state_save=0 00:09:41.417 do_verify=1 00:09:41.417 verify=crc32c-intel 00:09:41.417 [job0] 00:09:41.417 filename=/dev/nvme0n1 00:09:41.417 [job1] 00:09:41.417 filename=/dev/nvme0n2 00:09:41.417 [job2] 00:09:41.417 filename=/dev/nvme0n3 00:09:41.418 [job3] 00:09:41.418 filename=/dev/nvme0n4 00:09:41.418 Could not set queue depth (nvme0n1) 00:09:41.418 Could not set queue depth (nvme0n2) 00:09:41.418 Could not set queue depth (nvme0n3) 00:09:41.418 Could not set queue depth (nvme0n4) 00:09:41.418 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.418 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.418 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.418 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.418 fio-3.35 00:09:41.418 Starting 4 threads 00:09:42.793 00:09:42.793 job0: (groupid=0, jobs=1): err= 0: pid=66440: Tue Dec 10 21:36:43 2024 00:09:42.793 read: IOPS=2992, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1000msec) 00:09:42.793 slat (usec): min=11, max=154, avg=14.36, stdev= 4.66 00:09:42.793 clat (usec): min=138, max=1929, avg=166.48, stdev=36.23 00:09:42.793 lat (usec): min=151, max=1942, avg=180.84, stdev=36.86 00:09:42.793 clat percentiles 
(usec): 00:09:42.793 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:09:42.793 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:09:42.793 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:09:42.793 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 461], 99.95th=[ 553], 00:09:42.793 | 99.99th=[ 1926] 00:09:42.793 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:09:42.793 slat (usec): min=14, max=119, avg=20.78, stdev= 4.21 00:09:42.793 clat (usec): min=95, max=559, avg=125.24, stdev=13.96 00:09:42.793 lat (usec): min=114, max=586, avg=146.01, stdev=14.89 00:09:42.793 clat percentiles (usec): 00:09:42.793 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 116], 00:09:42.793 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:09:42.793 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:09:42.793 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 229], 99.95th=[ 249], 00:09:42.793 | 99.99th=[ 562] 00:09:42.793 bw ( KiB/s): min=12288, max=12288, per=27.40%, avg=12288.00, stdev= 0.00, samples=1 00:09:42.793 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:42.793 lat (usec) : 100=0.31%, 250=99.54%, 500=0.10%, 750=0.03% 00:09:42.793 lat (msec) : 2=0.02% 00:09:42.793 cpu : usr=2.00%, sys=8.80%, ctx=6065, majf=0, minf=9 00:09:42.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 issued rwts: total=2992,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.793 job1: (groupid=0, jobs=1): err= 0: pid=66441: Tue Dec 10 21:36:43 2024 00:09:42.793 read: IOPS=2170, BW=8683KiB/s (8892kB/s)(8692KiB/1001msec) 00:09:42.793 slat (nsec): min=10130, max=50920, avg=16337.29, stdev=4501.01 00:09:42.793 clat (usec): min=140, max=7314, avg=223.89, stdev=210.67 00:09:42.793 lat (usec): min=155, max=7341, avg=240.23, stdev=210.88 00:09:42.793 clat percentiles (usec): 00:09:42.793 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:42.793 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 190], 00:09:42.793 | 70.00th=[ 241], 80.00th=[ 260], 90.00th=[ 355], 95.00th=[ 388], 00:09:42.793 | 99.00th=[ 510], 99.50th=[ 627], 99.90th=[ 3720], 99.95th=[ 3752], 00:09:42.793 | 99.99th=[ 7308] 00:09:42.793 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.793 slat (usec): min=12, max=123, avg=21.61, stdev= 6.23 00:09:42.793 clat (usec): min=74, max=525, avg=161.70, stdev=47.48 00:09:42.793 lat (usec): min=117, max=552, avg=183.31, stdev=48.06 00:09:42.793 clat percentiles (usec): 00:09:42.793 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:09:42.793 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 141], 60.00th=[ 178], 00:09:42.793 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 00:09:42.793 | 99.00th=[ 297], 99.50th=[ 416], 99.90th=[ 482], 99.95th=[ 506], 00:09:42.793 | 99.99th=[ 529] 00:09:42.793 bw ( KiB/s): min=12288, max=12288, per=27.40%, avg=12288.00, stdev= 0.00, samples=1 00:09:42.793 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:42.793 lat (usec) : 100=0.06%, 250=87.24%, 500=12.15%, 750=0.38% 00:09:42.793 lat (msec) : 2=0.08%, 4=0.06%, 10=0.02% 00:09:42.793 cpu : usr=2.20%, sys=7.30%, ctx=4740, 
majf=0, minf=11 00:09:42.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 issued rwts: total=2173,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.793 job2: (groupid=0, jobs=1): err= 0: pid=66442: Tue Dec 10 21:36:43 2024 00:09:42.793 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:42.793 slat (nsec): min=12228, max=41155, avg=14079.05, stdev=2383.74 00:09:42.793 clat (usec): min=150, max=665, avg=182.08, stdev=25.87 00:09:42.793 lat (usec): min=163, max=678, avg=196.16, stdev=26.25 00:09:42.793 clat percentiles (usec): 00:09:42.793 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:09:42.793 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:09:42.793 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:09:42.793 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 478], 99.95th=[ 529], 00:09:42.793 | 99.99th=[ 668] 00:09:42.793 write: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:09:42.793 slat (usec): min=15, max=117, avg=22.72, stdev= 6.49 00:09:42.793 clat (usec): min=100, max=1751, avg=138.38, stdev=37.97 00:09:42.793 lat (usec): min=122, max=1780, avg=161.10, stdev=39.10 00:09:42.793 clat percentiles (usec): 00:09:42.793 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 127], 00:09:42.793 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:09:42.793 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:09:42.793 | 99.00th=[ 176], 99.50th=[ 198], 99.90th=[ 611], 99.95th=[ 725], 00:09:42.793 | 99.99th=[ 1745] 00:09:42.793 bw ( KiB/s): min=12288, max=12288, per=27.40%, avg=12288.00, stdev= 0.00, samples=1 00:09:42.793 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:42.793 lat (usec) : 250=98.53%, 500=1.36%, 750=0.09% 00:09:42.793 lat (msec) : 2=0.02% 00:09:42.793 cpu : usr=2.10%, sys=8.50%, ctx=5592, majf=0, minf=11 00:09:42.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.793 issued rwts: total=2560,3032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.793 job3: (groupid=0, jobs=1): err= 0: pid=66443: Tue Dec 10 21:36:43 2024 00:09:42.794 read: IOPS=2167, BW=8671KiB/s (8879kB/s)(8680KiB/1001msec) 00:09:42.794 slat (nsec): min=10071, max=54601, avg=16006.75, stdev=4623.92 00:09:42.794 clat (usec): min=144, max=1401, avg=218.20, stdev=76.38 00:09:42.794 lat (usec): min=159, max=1427, avg=234.21, stdev=77.64 00:09:42.794 clat percentiles (usec): 00:09:42.794 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:09:42.794 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:09:42.794 | 70.00th=[ 241], 80.00th=[ 262], 90.00th=[ 351], 95.00th=[ 383], 00:09:42.794 | 99.00th=[ 461], 99.50th=[ 510], 99.90th=[ 644], 99.95th=[ 1020], 00:09:42.794 | 99.99th=[ 1401] 00:09:42.794 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.794 slat (usec): min=12, max=113, avg=21.80, stdev= 5.11 00:09:42.794 clat (usec): min=108, max=586, avg=166.91, 
stdev=43.97 00:09:42.794 lat (usec): min=132, max=635, avg=188.71, stdev=43.71 00:09:42.794 clat percentiles (usec): 00:09:42.794 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:09:42.794 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 151], 60.00th=[ 176], 00:09:42.794 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 227], 00:09:42.794 | 99.00th=[ 302], 99.50th=[ 424], 99.90th=[ 502], 99.95th=[ 502], 00:09:42.794 | 99.99th=[ 586] 00:09:42.794 bw ( KiB/s): min=12288, max=12288, per=27.40%, avg=12288.00, stdev= 0.00, samples=1 00:09:42.794 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:42.794 lat (usec) : 250=86.93%, 500=12.77%, 750=0.25% 00:09:42.794 lat (msec) : 2=0.04% 00:09:42.794 cpu : usr=2.30%, sys=7.20%, ctx=4734, majf=0, minf=14 00:09:42.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.794 issued rwts: total=2170,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.794 00:09:42.794 Run status group 0 (all jobs): 00:09:42.794 READ: bw=38.6MiB/s (40.5MB/s), 8671KiB/s-11.7MiB/s (8879kB/s-12.3MB/s), io=38.7MiB (40.5MB), run=1000-1001msec 00:09:42.794 WRITE: bw=43.8MiB/s (45.9MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=43.8MiB (46.0MB), run=1000-1001msec 00:09:42.794 00:09:42.794 Disk stats (read/write): 00:09:42.794 nvme0n1: ios=2610/2616, merge=0/0, ticks=456/350, in_queue=806, util=87.47% 00:09:42.794 nvme0n2: ios=2091/2085, merge=0/0, ticks=480/314, in_queue=794, util=87.83% 00:09:42.794 nvme0n3: ios=2187/2560, merge=0/0, ticks=419/372, in_queue=791, util=88.89% 00:09:42.794 nvme0n4: ios=2048/2083, merge=0/0, ticks=437/338, in_queue=775, util=89.63% 00:09:42.794 21:36:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:42.794 [global] 00:09:42.794 thread=1 00:09:42.794 invalidate=1 00:09:42.794 rw=write 00:09:42.794 time_based=1 00:09:42.794 runtime=1 00:09:42.794 ioengine=libaio 00:09:42.794 direct=1 00:09:42.794 bs=4096 00:09:42.794 iodepth=128 00:09:42.794 norandommap=0 00:09:42.794 numjobs=1 00:09:42.794 00:09:42.794 verify_dump=1 00:09:42.794 verify_backlog=512 00:09:42.794 verify_state_save=0 00:09:42.794 do_verify=1 00:09:42.794 verify=crc32c-intel 00:09:42.794 [job0] 00:09:42.794 filename=/dev/nvme0n1 00:09:42.794 [job1] 00:09:42.794 filename=/dev/nvme0n2 00:09:42.794 [job2] 00:09:42.794 filename=/dev/nvme0n3 00:09:42.794 [job3] 00:09:42.794 filename=/dev/nvme0n4 00:09:42.794 Could not set queue depth (nvme0n1) 00:09:42.794 Could not set queue depth (nvme0n2) 00:09:42.794 Could not set queue depth (nvme0n3) 00:09:42.794 Could not set queue depth (nvme0n4) 00:09:42.794 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.794 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.794 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.794 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.794 fio-3.35 00:09:42.794 Starting 4 threads 00:09:44.170 00:09:44.170 job0: (groupid=0, jobs=1): err= 0: 
pid=66498: Tue Dec 10 21:36:44 2024 00:09:44.170 read: IOPS=2760, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1005msec) 00:09:44.170 slat (usec): min=4, max=13102, avg=180.28, stdev=768.83 00:09:44.170 clat (usec): min=1968, max=47589, avg=22480.39, stdev=9745.68 00:09:44.170 lat (usec): min=4632, max=47604, avg=22660.67, stdev=9808.70 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[11207], 20.00th=[11863], 00:09:44.170 | 30.00th=[12125], 40.00th=[15926], 50.00th=[25560], 60.00th=[27919], 00:09:44.170 | 70.00th=[28967], 80.00th=[30802], 90.00th=[34341], 95.00th=[36963], 00:09:44.170 | 99.00th=[45876], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:09:44.170 | 99.99th=[47449] 00:09:44.170 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:44.170 slat (usec): min=9, max=6891, avg=156.18, stdev=687.89 00:09:44.170 clat (usec): min=8367, max=43844, avg=21037.42, stdev=9673.47 00:09:44.170 lat (usec): min=8382, max=43862, avg=21193.59, stdev=9725.98 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11207], 20.00th=[11731], 00:09:44.170 | 30.00th=[11994], 40.00th=[13173], 50.00th=[17433], 60.00th=[26084], 00:09:44.170 | 70.00th=[27657], 80.00th=[30278], 90.00th=[34341], 95.00th=[38011], 00:09:44.170 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:44.170 | 99.99th=[43779] 00:09:44.170 bw ( KiB/s): min= 8192, max=16416, per=22.95%, avg=12304.00, stdev=5815.25, samples=2 00:09:44.170 iops : min= 2048, max= 4104, avg=3076.00, stdev=1453.81, samples=2 00:09:44.170 lat (msec) : 2=0.02%, 10=2.02%, 20=44.27%, 50=53.69% 00:09:44.170 cpu : usr=2.49%, sys=8.27%, ctx=551, majf=0, minf=1 00:09:44.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.170 issued rwts: total=2774,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.170 job1: (groupid=0, jobs=1): err= 0: pid=66499: Tue Dec 10 21:36:44 2024 00:09:44.170 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:09:44.170 slat (usec): min=4, max=6625, avg=207.23, stdev=907.98 00:09:44.170 clat (usec): min=14338, max=46560, avg=25575.89, stdev=6848.06 00:09:44.170 lat (usec): min=14358, max=46582, avg=25783.12, stdev=6909.05 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[15401], 5.00th=[16909], 10.00th=[18744], 20.00th=[20055], 00:09:44.170 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22938], 60.00th=[25035], 00:09:44.170 | 70.00th=[28443], 80.00th=[32375], 90.00th=[36963], 95.00th=[39584], 00:09:44.170 | 99.00th=[40109], 99.50th=[42206], 99.90th=[44827], 99.95th=[46400], 00:09:44.170 | 99.99th=[46400] 00:09:44.170 write: IOPS=2220, BW=8883KiB/s (9096kB/s)(8936KiB/1006msec); 0 zone resets 00:09:44.170 slat (usec): min=17, max=9096, avg=251.67, stdev=822.45 00:09:44.170 clat (usec): min=3386, max=59212, avg=33177.38, stdev=9491.15 00:09:44.170 lat (usec): min=5828, max=59233, avg=33429.04, stdev=9539.96 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[10421], 5.00th=[19792], 10.00th=[19792], 20.00th=[21103], 00:09:44.170 | 30.00th=[28443], 40.00th=[33424], 50.00th=[35390], 60.00th=[36963], 00:09:44.170 | 70.00th=[38011], 80.00th=[39060], 90.00th=[43254], 95.00th=[49021], 00:09:44.170 | 99.00th=[55313], 
99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:09:44.170 | 99.99th=[58983] 00:09:44.170 bw ( KiB/s): min= 8416, max= 8448, per=15.73%, avg=8432.00, stdev=22.63, samples=2 00:09:44.170 iops : min= 2104, max= 2112, avg=2108.00, stdev= 5.66, samples=2 00:09:44.170 lat (msec) : 4=0.02%, 10=0.37%, 20=14.83%, 50=82.30%, 100=2.48% 00:09:44.170 cpu : usr=2.29%, sys=4.98%, ctx=318, majf=0, minf=10 00:09:44.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:09:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.170 issued rwts: total=2048,2234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.170 job2: (groupid=0, jobs=1): err= 0: pid=66500: Tue Dec 10 21:36:44 2024 00:09:44.170 read: IOPS=5557, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1002msec) 00:09:44.170 slat (usec): min=5, max=3207, avg=85.94, stdev=397.30 00:09:44.170 clat (usec): min=310, max=13894, avg=11433.72, stdev=1215.64 00:09:44.170 lat (usec): min=2426, max=13911, avg=11519.66, stdev=1153.64 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[ 5669], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:09:44.170 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:44.170 | 70.00th=[11600], 80.00th=[12125], 90.00th=[12911], 95.00th=[13304], 00:09:44.170 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:09:44.170 | 99.99th=[13960] 00:09:44.170 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:44.170 slat (usec): min=9, max=4621, avg=84.98, stdev=353.69 00:09:44.170 clat (usec): min=8157, max=15409, avg=11171.61, stdev=1050.20 00:09:44.170 lat (usec): min=9467, max=15436, avg=11256.59, stdev=997.25 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10290], 20.00th=[10421], 00:09:44.170 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:09:44.170 | 70.00th=[11338], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:09:44.170 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15401], 99.95th=[15401], 00:09:44.170 | 99.99th=[15401] 00:09:44.170 bw ( KiB/s): min=21512, max=23544, per=42.02%, avg=22528.00, stdev=1436.84, samples=2 00:09:44.170 iops : min= 5378, max= 5886, avg=5632.00, stdev=359.21, samples=2 00:09:44.170 lat (usec) : 500=0.01% 00:09:44.170 lat (msec) : 4=0.29%, 10=3.50%, 20=96.21% 00:09:44.170 cpu : usr=5.00%, sys=15.48%, ctx=352, majf=0, minf=1 00:09:44.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.170 issued rwts: total=5569,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.170 job3: (groupid=0, jobs=1): err= 0: pid=66501: Tue Dec 10 21:36:44 2024 00:09:44.170 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:09:44.170 slat (usec): min=7, max=13025, avg=234.89, stdev=939.72 00:09:44.170 clat (usec): min=13955, max=49896, avg=30734.20, stdev=6799.78 00:09:44.170 lat (usec): min=16887, max=50120, avg=30969.09, stdev=6791.76 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[16909], 5.00th=[19530], 10.00th=[23725], 20.00th=[27395], 00:09:44.170 | 30.00th=[28181], 
40.00th=[28705], 50.00th=[28967], 60.00th=[29492], 00:09:44.170 | 70.00th=[31589], 80.00th=[35914], 90.00th=[41157], 95.00th=[45876], 00:09:44.170 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:09:44.170 | 99.99th=[50070] 00:09:44.170 write: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(9.94MiB/1005msec); 0 zone resets 00:09:44.170 slat (usec): min=7, max=9020, avg=197.51, stdev=861.86 00:09:44.170 clat (usec): min=1876, max=37728, avg=24984.00, stdev=5766.33 00:09:44.170 lat (usec): min=4316, max=38222, avg=25181.51, stdev=5759.43 00:09:44.170 clat percentiles (usec): 00:09:44.170 | 1.00th=[10290], 5.00th=[15008], 10.00th=[15533], 20.00th=[19792], 00:09:44.170 | 30.00th=[23462], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:09:44.170 | 70.00th=[27395], 80.00th=[29492], 90.00th=[31589], 95.00th=[32637], 00:09:44.170 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:09:44.170 | 99.99th=[37487] 00:09:44.170 bw ( KiB/s): min= 8152, max=11198, per=18.05%, avg=9675.00, stdev=2153.85, samples=2 00:09:44.170 iops : min= 2038, max= 2799, avg=2418.50, stdev=538.11, samples=2 00:09:44.170 lat (msec) : 2=0.02%, 10=0.52%, 20=14.16%, 50=85.30% 00:09:44.170 cpu : usr=2.19%, sys=7.07%, ctx=460, majf=0, minf=3 00:09:44.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:44.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.170 issued rwts: total=2048,2544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.171 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.171 00:09:44.171 Run status group 0 (all jobs): 00:09:44.171 READ: bw=48.3MiB/s (50.6MB/s), 8143KiB/s-21.7MiB/s (8339kB/s-22.8MB/s), io=48.6MiB (50.9MB), run=1002-1006msec 00:09:44.171 WRITE: bw=52.3MiB/s (54.9MB/s), 8883KiB/s-22.0MiB/s (9096kB/s-23.0MB/s), io=52.7MiB (55.2MB), run=1002-1006msec 00:09:44.171 00:09:44.171 Disk stats (read/write): 00:09:44.171 nvme0n1: ios=2610/2742, merge=0/0, ticks=14815/12431, in_queue=27246, util=87.27% 00:09:44.171 nvme0n2: ios=1582/2048, merge=0/0, ticks=12704/22318, in_queue=35022, util=88.47% 00:09:44.171 nvme0n3: ios=4614/4992, merge=0/0, ticks=11819/11806, in_queue=23625, util=89.27% 00:09:44.171 nvme0n4: ios=1962/2048, merge=0/0, ticks=14149/11301, in_queue=25450, util=89.72% 00:09:44.171 21:36:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:44.171 [global] 00:09:44.171 thread=1 00:09:44.171 invalidate=1 00:09:44.171 rw=randwrite 00:09:44.171 time_based=1 00:09:44.171 runtime=1 00:09:44.171 ioengine=libaio 00:09:44.171 direct=1 00:09:44.171 bs=4096 00:09:44.171 iodepth=128 00:09:44.171 norandommap=0 00:09:44.171 numjobs=1 00:09:44.171 00:09:44.171 verify_dump=1 00:09:44.171 verify_backlog=512 00:09:44.171 verify_state_save=0 00:09:44.171 do_verify=1 00:09:44.171 verify=crc32c-intel 00:09:44.171 [job0] 00:09:44.171 filename=/dev/nvme0n1 00:09:44.171 [job1] 00:09:44.171 filename=/dev/nvme0n2 00:09:44.171 [job2] 00:09:44.171 filename=/dev/nvme0n3 00:09:44.171 [job3] 00:09:44.171 filename=/dev/nvme0n4 00:09:44.171 Could not set queue depth (nvme0n1) 00:09:44.171 Could not set queue depth (nvme0n2) 00:09:44.171 Could not set queue depth (nvme0n3) 00:09:44.171 Could not set queue depth (nvme0n4) 00:09:44.171 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:44.171 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.171 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.171 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.171 fio-3.35 00:09:44.171 Starting 4 threads 00:09:45.546 00:09:45.546 job0: (groupid=0, jobs=1): err= 0: pid=66554: Tue Dec 10 21:36:46 2024 00:09:45.546 read: IOPS=3237, BW=12.6MiB/s (13.3MB/s)(12.8MiB/1009msec) 00:09:45.546 slat (usec): min=6, max=9770, avg=141.28, stdev=931.25 00:09:45.546 clat (usec): min=2748, max=31590, avg=19426.37, stdev=2474.48 00:09:45.546 lat (usec): min=10619, max=38242, avg=19567.65, stdev=2508.54 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[11207], 5.00th=[13173], 10.00th=[18220], 20.00th=[19006], 00:09:45.546 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:09:45.546 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21103], 95.00th=[21890], 00:09:45.546 | 99.00th=[29754], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:09:45.546 | 99.99th=[31589] 00:09:45.546 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:09:45.546 slat (usec): min=9, max=13123, avg=143.21, stdev=907.86 00:09:45.546 clat (usec): min=9441, max=25154, avg=17952.04, stdev=1712.90 00:09:45.546 lat (usec): min=13095, max=25401, avg=18095.24, stdev=1514.98 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[11207], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:09:45.546 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:09:45.546 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[20055], 00:09:45.546 | 99.00th=[23725], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:09:45.546 | 99.99th=[25035] 00:09:45.546 bw ( KiB/s): min=14344, max=14356, per=27.19%, avg=14350.00, stdev= 8.49, samples=2 00:09:45.546 iops : min= 3586, max= 3589, avg=3587.50, stdev= 2.12, samples=2 00:09:45.546 lat (msec) : 4=0.01%, 10=0.09%, 20=85.83%, 50=14.07% 00:09:45.546 cpu : usr=2.48%, sys=11.81%, ctx=176, majf=0, minf=13 00:09:45.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:45.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.546 issued rwts: total=3267,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.546 job1: (groupid=0, jobs=1): err= 0: pid=66555: Tue Dec 10 21:36:46 2024 00:09:45.546 read: IOPS=3237, BW=12.6MiB/s (13.3MB/s)(12.8MiB/1009msec) 00:09:45.546 slat (usec): min=9, max=9419, avg=141.16, stdev=928.66 00:09:45.546 clat (usec): min=2729, max=31677, avg=19421.46, stdev=2466.21 00:09:45.546 lat (usec): min=10457, max=37813, avg=19562.62, stdev=2499.17 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[11207], 5.00th=[13173], 10.00th=[18482], 20.00th=[19006], 00:09:45.546 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:09:45.546 | 70.00th=[19792], 80.00th=[20317], 90.00th=[20841], 95.00th=[21890], 00:09:45.546 | 99.00th=[29754], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:09:45.546 | 99.99th=[31589] 00:09:45.546 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:09:45.546 slat (usec): min=7, 
max=14139, avg=143.51, stdev=921.94 00:09:45.546 clat (usec): min=9262, max=25203, avg=17984.36, stdev=1781.94 00:09:45.546 lat (usec): min=12905, max=25399, avg=18127.88, stdev=1585.24 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[11076], 5.00th=[16188], 10.00th=[16581], 20.00th=[17171], 00:09:45.546 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:09:45.546 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:09:45.546 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25035], 99.95th=[25297], 00:09:45.546 | 99.99th=[25297] 00:09:45.546 bw ( KiB/s): min=14344, max=14356, per=27.19%, avg=14350.00, stdev= 8.49, samples=2 00:09:45.546 iops : min= 3586, max= 3589, avg=3587.50, stdev= 2.12, samples=2 00:09:45.546 lat (msec) : 4=0.01%, 10=0.09%, 20=85.30%, 50=14.60% 00:09:45.546 cpu : usr=2.58%, sys=11.51%, ctx=137, majf=0, minf=17 00:09:45.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:45.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.546 issued rwts: total=3267,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.546 job2: (groupid=0, jobs=1): err= 0: pid=66559: Tue Dec 10 21:36:46 2024 00:09:45.546 read: IOPS=2130, BW=8520KiB/s (8725kB/s)(8580KiB/1007msec) 00:09:45.546 slat (usec): min=10, max=13489, avg=247.28, stdev=1432.00 00:09:45.546 clat (usec): min=1019, max=54767, avg=30558.66, stdev=9866.72 00:09:45.546 lat (usec): min=8102, max=54791, avg=30805.95, stdev=9852.33 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[ 8455], 5.00th=[17433], 10.00th=[22414], 20.00th=[23200], 00:09:45.546 | 30.00th=[25822], 40.00th=[26870], 50.00th=[28181], 60.00th=[28967], 00:09:45.546 | 70.00th=[32637], 80.00th=[38011], 90.00th=[48497], 95.00th=[52167], 00:09:45.546 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:09:45.546 | 99.99th=[54789] 00:09:45.546 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:09:45.546 slat (usec): min=11, max=11329, avg=177.77, stdev=959.30 00:09:45.546 clat (usec): min=9821, max=50545, avg=23538.28, stdev=8101.43 00:09:45.546 lat (usec): min=11961, max=50595, avg=23716.05, stdev=8072.21 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[11994], 5.00th=[12649], 10.00th=[15533], 20.00th=[16581], 00:09:45.546 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21103], 60.00th=[21890], 00:09:45.546 | 70.00th=[26870], 80.00th=[31851], 90.00th=[33817], 95.00th=[36439], 00:09:45.546 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:09:45.546 | 99.99th=[50594] 00:09:45.546 bw ( KiB/s): min= 8985, max=11264, per=19.18%, avg=10124.50, stdev=1611.50, samples=2 00:09:45.546 iops : min= 2246, max= 2816, avg=2531.00, stdev=403.05, samples=2 00:09:45.546 lat (msec) : 2=0.02%, 10=0.74%, 20=20.02%, 50=76.00%, 100=3.21% 00:09:45.546 cpu : usr=2.29%, sys=7.06%, ctx=148, majf=0, minf=11 00:09:45.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:45.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.546 issued rwts: total=2145,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.546 job3: (groupid=0, jobs=1): err= 
0: pid=66562: Tue Dec 10 21:36:46 2024 00:09:45.546 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1009msec) 00:09:45.546 slat (usec): min=8, max=19560, avg=160.91, stdev=1175.21 00:09:45.546 clat (usec): min=1692, max=40103, avg=21972.17, stdev=4116.23 00:09:45.546 lat (usec): min=9112, max=42565, avg=22133.09, stdev=4205.93 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[ 9634], 5.00th=[15664], 10.00th=[19006], 20.00th=[19530], 00:09:45.546 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20317], 60.00th=[22152], 00:09:45.546 | 70.00th=[24511], 80.00th=[26870], 90.00th=[27919], 95.00th=[28181], 00:09:45.546 | 99.00th=[29230], 99.50th=[30016], 99.90th=[39060], 99.95th=[39584], 00:09:45.546 | 99.99th=[40109] 00:09:45.546 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:09:45.546 slat (usec): min=6, max=10881, avg=120.40, stdev=780.40 00:09:45.546 clat (usec): min=5708, max=28007, avg=14518.61, stdev=2021.49 00:09:45.546 lat (usec): min=10133, max=28031, avg=14639.01, stdev=1904.64 00:09:45.546 clat percentiles (usec): 00:09:45.546 | 1.00th=[ 9372], 5.00th=[12387], 10.00th=[12518], 20.00th=[12911], 00:09:45.546 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14091], 60.00th=[15139], 00:09:45.546 | 70.00th=[15664], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:09:45.546 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21627], 99.95th=[27132], 00:09:45.546 | 99.99th=[27919] 00:09:45.546 bw ( KiB/s): min=13320, max=15352, per=27.17%, avg=14336.00, stdev=1436.84, samples=2 00:09:45.546 iops : min= 3330, max= 3838, avg=3584.00, stdev=359.21, samples=2 00:09:45.546 lat (msec) : 2=0.01%, 10=1.42%, 20=71.36%, 50=27.21% 00:09:45.546 cpu : usr=2.18%, sys=7.24%, ctx=151, majf=0, minf=11 00:09:45.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:45.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.546 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.546 00:09:45.546 Run status group 0 (all jobs): 00:09:45.546 READ: bw=47.0MiB/s (49.2MB/s), 8520KiB/s-13.4MiB/s (8725kB/s-14.0MB/s), io=47.4MiB (49.7MB), run=1007-1009msec 00:09:45.546 WRITE: bw=51.5MiB/s (54.0MB/s), 9.93MiB/s-13.9MiB/s (10.4MB/s-14.5MB/s), io=52.0MiB (54.5MB), run=1007-1009msec 00:09:45.546 00:09:45.546 Disk stats (read/write): 00:09:45.546 nvme0n1: ios=2810/3072, merge=0/0, ticks=51255/51448, in_queue=102703, util=88.58% 00:09:45.546 nvme0n2: ios=2809/3072, merge=0/0, ticks=51293/51667, in_queue=102960, util=89.39% 00:09:45.546 nvme0n3: ios=1958/2048, merge=0/0, ticks=15739/10206, in_queue=25945, util=89.53% 00:09:45.546 nvme0n4: ios=2965/3072, merge=0/0, ticks=63508/41500, in_queue=105008, util=90.30% 00:09:45.546 21:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:45.546 21:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66581 00:09:45.546 21:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:45.546 21:36:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:45.546 [global] 00:09:45.546 thread=1 00:09:45.546 invalidate=1 00:09:45.546 rw=read 00:09:45.546 time_based=1 00:09:45.546 runtime=10 00:09:45.546 ioengine=libaio 00:09:45.546 direct=1 
00:09:45.546 bs=4096 00:09:45.546 iodepth=1 00:09:45.546 norandommap=1 00:09:45.546 numjobs=1 00:09:45.546 00:09:45.546 [job0] 00:09:45.546 filename=/dev/nvme0n1 00:09:45.546 [job1] 00:09:45.546 filename=/dev/nvme0n2 00:09:45.546 [job2] 00:09:45.546 filename=/dev/nvme0n3 00:09:45.546 [job3] 00:09:45.546 filename=/dev/nvme0n4 00:09:45.546 Could not set queue depth (nvme0n1) 00:09:45.546 Could not set queue depth (nvme0n2) 00:09:45.546 Could not set queue depth (nvme0n3) 00:09:45.546 Could not set queue depth (nvme0n4) 00:09:45.546 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.547 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.547 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.547 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.547 fio-3.35 00:09:45.547 Starting 4 threads 00:09:48.829 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:48.830 fio: pid=66624, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.830 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=60928000, buflen=4096 00:09:48.830 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:49.088 fio: pid=66623, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.088 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58290176, buflen=4096 00:09:49.088 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.088 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:49.346 fio: pid=66621, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.347 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57081856, buflen=4096 00:09:49.347 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.347 21:36:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:49.605 fio: pid=66622, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.605 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2441216, buflen=4096 00:09:49.605 00:09:49.605 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66621: Tue Dec 10 21:36:50 2024 00:09:49.605 read: IOPS=3962, BW=15.5MiB/s (16.2MB/s)(54.4MiB/3517msec) 00:09:49.605 slat (usec): min=8, max=12359, avg=18.00, stdev=163.81 00:09:49.605 clat (usec): min=128, max=5276, avg=232.82, stdev=86.03 00:09:49.605 lat (usec): min=147, max=12865, avg=250.82, stdev=186.98 00:09:49.605 clat percentiles (usec): 00:09:49.605 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:09:49.605 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:09:49.605 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 
310], 00:09:49.605 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 979], 99.95th=[ 2008], 00:09:49.605 | 99.99th=[ 4178] 00:09:49.605 bw ( KiB/s): min=14064, max=17824, per=25.55%, avg=16090.67, stdev=1649.39, samples=6 00:09:49.605 iops : min= 3516, max= 4456, avg=4022.67, stdev=412.35, samples=6 00:09:49.605 lat (usec) : 250=77.58%, 500=22.19%, 750=0.11%, 1000=0.04% 00:09:49.605 lat (msec) : 2=0.04%, 4=0.03%, 10=0.02% 00:09:49.605 cpu : usr=1.17%, sys=5.72%, ctx=13959, majf=0, minf=1 00:09:49.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.605 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.605 issued rwts: total=13937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.605 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66622: Tue Dec 10 21:36:50 2024 00:09:49.605 read: IOPS=4454, BW=17.4MiB/s (18.2MB/s)(66.3MiB/3812msec) 00:09:49.605 slat (usec): min=8, max=12585, avg=19.89, stdev=190.73 00:09:49.605 clat (usec): min=3, max=6817, avg=203.00, stdev=130.51 00:09:49.605 lat (usec): min=143, max=12961, avg=222.89, stdev=232.80 00:09:49.605 clat percentiles (usec): 00:09:49.605 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:49.605 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 186], 00:09:49.605 | 70.00th=[ 221], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 306], 00:09:49.605 | 99.00th=[ 351], 99.50th=[ 388], 99.90th=[ 1860], 99.95th=[ 3195], 00:09:49.605 | 99.99th=[ 6259] 00:09:49.605 bw ( KiB/s): min=14256, max=21752, per=28.16%, avg=17733.29, stdev=3387.20, samples=7 00:09:49.605 iops : min= 3564, max= 5438, avg=4433.29, stdev=846.84, samples=7 00:09:49.605 lat (usec) : 4=0.01%, 250=82.88%, 500=16.79%, 750=0.10%, 1000=0.04% 00:09:49.605 lat (msec) : 2=0.09%, 4=0.06%, 10=0.03% 00:09:49.605 cpu : usr=1.73%, sys=6.25%, ctx=17004, majf=0, minf=2 00:09:49.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.605 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.605 issued rwts: total=16981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.605 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66623: Tue Dec 10 21:36:50 2024 00:09:49.605 read: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(55.6MiB/3251msec) 00:09:49.605 slat (usec): min=8, max=12223, avg=25.91, stdev=121.09 00:09:49.605 clat (usec): min=148, max=5320, avg=200.19, stdev=77.70 00:09:49.605 lat (usec): min=162, max=12507, avg=226.11, stdev=144.34 00:09:49.605 clat percentiles (usec): 00:09:49.605 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:09:49.605 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:09:49.605 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 255], 95.00th=[ 281], 00:09:49.605 | 99.00th=[ 355], 99.50th=[ 408], 99.90th=[ 742], 99.95th=[ 1156], 00:09:49.606 | 99.99th=[ 3261] 00:09:49.606 bw ( KiB/s): min=16304, max=19256, per=28.46%, avg=17924.00, stdev=988.73, samples=6 00:09:49.606 iops : min= 4076, max= 4814, avg=4481.00, stdev=247.18, samples=6 00:09:49.606 lat (usec) : 250=88.93%, 500=10.76%, 750=0.21%, 1000=0.04% 
00:09:49.606 lat (msec) : 2=0.04%, 4=0.02%, 10=0.01% 00:09:49.606 cpu : usr=2.15%, sys=9.66%, ctx=14243, majf=0, minf=1 00:09:49.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.606 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.606 issued rwts: total=14232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.606 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66624: Tue Dec 10 21:36:50 2024 00:09:49.606 read: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(58.1MiB/2936msec) 00:09:49.606 slat (usec): min=12, max=105, avg=14.94, stdev= 3.31 00:09:49.606 clat (usec): min=119, max=2526, avg=180.98, stdev=28.76 00:09:49.606 lat (usec): min=156, max=2555, avg=195.92, stdev=29.11 00:09:49.606 clat percentiles (usec): 00:09:49.606 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:09:49.606 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:09:49.606 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:09:49.606 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 314], 99.95th=[ 449], 00:09:49.606 | 99.99th=[ 1205] 00:09:49.606 bw ( KiB/s): min=19936, max=20456, per=32.17%, avg=20264.00, stdev=195.06, samples=5 00:09:49.606 iops : min= 4984, max= 5114, avg=5066.00, stdev=48.76, samples=5 00:09:49.606 lat (usec) : 250=99.30%, 500=0.65%, 750=0.01%, 1000=0.01% 00:09:49.606 lat (msec) : 2=0.01%, 4=0.01% 00:09:49.606 cpu : usr=1.40%, sys=6.64%, ctx=14877, majf=0, minf=2 00:09:49.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.606 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.606 issued rwts: total=14876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.606 00:09:49.606 Run status group 0 (all jobs): 00:09:49.606 READ: bw=61.5MiB/s (64.5MB/s), 15.5MiB/s-19.8MiB/s (16.2MB/s-20.8MB/s), io=234MiB (246MB), run=2936-3812msec 00:09:49.606 00:09:49.606 Disk stats (read/write): 00:09:49.606 nvme0n1: ios=13273/0, merge=0/0, ticks=3026/0, in_queue=3026, util=95.11% 00:09:49.606 nvme0n2: ios=15913/0, merge=0/0, ticks=3221/0, in_queue=3221, util=94.68% 00:09:49.606 nvme0n3: ios=13763/0, merge=0/0, ticks=2764/0, in_queue=2764, util=96.14% 00:09:49.606 nvme0n4: ios=14504/0, merge=0/0, ticks=2670/0, in_queue=2670, util=96.75% 00:09:49.606 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.606 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:49.864 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.864 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:50.123 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.123 21:36:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:50.689 21:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.689 21:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:51.256 21:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.256 21:36:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66581 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:51.514 nvmf hotplug test: fio failed as expected 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:51.514 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.772 21:36:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:51.772 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.773 rmmod nvme_tcp 00:09:51.773 rmmod nvme_fabrics 00:09:51.773 rmmod nvme_keyring 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66196 ']' 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66196 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66196 ']' 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66196 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.773 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66196 00:09:52.032 killing process with pid 66196 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66196' 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66196 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66196 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.032 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:52.321 00:09:52.321 real 0m20.517s 00:09:52.321 user 1m17.116s 00:09:52.321 sys 0m10.905s 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 ************************************ 00:09:52.321 END TEST nvmf_fio_target 00:09:52.321 ************************************ 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.321 21:36:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 ************************************ 00:09:52.321 START TEST nvmf_bdevio 00:09:52.321 ************************************ 00:09:52.321 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:52.321 * Looking for test storage... 
00:09:52.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.321 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.321 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.321 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.580 --rc genhtml_branch_coverage=1 00:09:52.580 --rc genhtml_function_coverage=1 00:09:52.580 --rc genhtml_legend=1 00:09:52.580 --rc geninfo_all_blocks=1 00:09:52.580 --rc geninfo_unexecuted_blocks=1 00:09:52.580 00:09:52.580 ' 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.580 --rc genhtml_branch_coverage=1 00:09:52.580 --rc genhtml_function_coverage=1 00:09:52.580 --rc genhtml_legend=1 00:09:52.580 --rc geninfo_all_blocks=1 00:09:52.580 --rc geninfo_unexecuted_blocks=1 00:09:52.580 00:09:52.580 ' 00:09:52.580 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.580 --rc genhtml_branch_coverage=1 00:09:52.580 --rc genhtml_function_coverage=1 00:09:52.580 --rc genhtml_legend=1 00:09:52.580 --rc geninfo_all_blocks=1 00:09:52.580 --rc geninfo_unexecuted_blocks=1 00:09:52.580 00:09:52.581 ' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.581 --rc genhtml_branch_coverage=1 00:09:52.581 --rc genhtml_function_coverage=1 00:09:52.581 --rc genhtml_legend=1 00:09:52.581 --rc geninfo_all_blocks=1 00:09:52.581 --rc geninfo_unexecuted_blocks=1 00:09:52.581 00:09:52.581 ' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.581 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
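For reference, the veth/namespace topology that the nvmftestinit trace below sets up can be reproduced by hand with roughly the following commands. This is a condensed sketch assembled from the ip/iptables calls captured in this log: the interface names and 10.0.0.x addresses are the ones the harness uses, error handling and the teardown path in nvmf/common.sh are omitted, and it assumes a root shell with iproute2 and iptables available.

    # create the network namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends join the bridge;
    # the target-side interfaces are moved into the namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses stay in the root namespace, target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, including loopback inside the namespace
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the four *_br ends together so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # allow NVMe/TCP traffic to the target port
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT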
00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:52.581 Cannot find device "nvmf_init_br" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:52.581 Cannot find device "nvmf_init_br2" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:52.581 Cannot find device "nvmf_tgt_br" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.581 Cannot find device "nvmf_tgt_br2" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:52.581 Cannot find device "nvmf_init_br" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:52.581 Cannot find device "nvmf_init_br2" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:52.581 Cannot find device "nvmf_tgt_br" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:52.581 Cannot find device "nvmf_tgt_br2" 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:52.581 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:52.581 Cannot find device "nvmf_br" 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:52.582 Cannot find device "nvmf_init_if" 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:52.582 Cannot find device "nvmf_init_if2" 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.582 
21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.582 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:52.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:52.840 00:09:52.840 --- 10.0.0.3 ping statistics --- 00:09:52.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.840 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:52.840 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:52.840 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:52.840 00:09:52.840 --- 10.0.0.4 ping statistics --- 00:09:52.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.840 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:52.840 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:09:52.841 00:09:52.841 --- 10.0.0.1 ping statistics --- 00:09:52.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.841 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:52.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:09:52.841 00:09:52.841 --- 10.0.0.2 ping statistics --- 00:09:52.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.841 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66943 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66943 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66943 ']' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.841 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.099 [2024-12-10 21:36:53.670030] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
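The nvmfappstart call recorded above amounts to launching the target binary inside the namespace and then blocking until its JSON-RPC socket answers. A minimal equivalent is sketched here, using the exact command line from the trace; the polling loop is a simplification of the harness's waitforlisten helper, not its actual implementation.

    #!/usr/bin/env bash
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -m 0x78 pins the reactors to cores 3-6 (the four reactors seen in the trace),
    # -e 0xFFFF enables all tracepoint groups, -i 0 sets the shared-memory instance id
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # wait for the RPC socket before issuing any rpc.py calls
    until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt up with pid $nvmfpid"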
00:09:53.099 [2024-12-10 21:36:53.670114] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.099 [2024-12-10 21:36:53.820478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.099 [2024-12-10 21:36:53.853332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.099 [2024-12-10 21:36:53.853390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.099 [2024-12-10 21:36:53.853401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.099 [2024-12-10 21:36:53.853409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.099 [2024-12-10 21:36:53.853417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.099 [2024-12-10 21:36:53.854203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.099 [2024-12-10 21:36:53.854357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.099 [2024-12-10 21:36:53.854482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.099 [2024-12-10 21:36:53.854726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.357 [2024-12-10 21:36:53.902029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:53.357 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.357 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:53.357 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.357 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.357 21:36:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 [2024-12-10 21:36:54.011106] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 Malloc0 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 [2024-12-10 21:36:54.077842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:53.357 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.358 { 00:09:53.358 "params": { 00:09:53.358 "name": "Nvme$subsystem", 00:09:53.358 "trtype": "$TEST_TRANSPORT", 00:09:53.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.358 "adrfam": "ipv4", 00:09:53.358 "trsvcid": "$NVMF_PORT", 00:09:53.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.358 "hdgst": ${hdgst:-false}, 00:09:53.358 "ddgst": ${ddgst:-false} 00:09:53.358 }, 00:09:53.358 "method": "bdev_nvme_attach_controller" 00:09:53.358 } 00:09:53.358 EOF 00:09:53.358 )") 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
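Spelled out as plain rpc.py invocations (rpc_cmd in the trace above is essentially a wrapper around scripts/rpc.py), the target-side provisioning that bdevio is about to exercise looks roughly like the sketch below; the flags are copied verbatim from the trace, and the generated attach-controller JSON that feeds bdevio is printed a few lines further down.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as recorded above
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevio then attaches to that listener as an initiator, driven by the JSON below:
    #   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62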
00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:53.358 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.358 "params": { 00:09:53.358 "name": "Nvme1", 00:09:53.358 "trtype": "tcp", 00:09:53.358 "traddr": "10.0.0.3", 00:09:53.358 "adrfam": "ipv4", 00:09:53.358 "trsvcid": "4420", 00:09:53.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.358 "hdgst": false, 00:09:53.358 "ddgst": false 00:09:53.358 }, 00:09:53.358 "method": "bdev_nvme_attach_controller" 00:09:53.358 }' 00:09:53.358 [2024-12-10 21:36:54.132606] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:09:53.358 [2024-12-10 21:36:54.132694] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66977 ] 00:09:53.616 [2024-12-10 21:36:54.276951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.616 [2024-12-10 21:36:54.312294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.616 [2024-12-10 21:36:54.312354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.616 [2024-12-10 21:36:54.312350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.616 [2024-12-10 21:36:54.366945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:53.874 I/O targets: 00:09:53.874 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:53.874 00:09:53.874 00:09:53.874 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.874 http://cunit.sourceforge.net/ 00:09:53.874 00:09:53.874 00:09:53.874 Suite: bdevio tests on: Nvme1n1 00:09:53.874 Test: blockdev write read block ...passed 00:09:53.874 Test: blockdev write zeroes read block ...passed 00:09:53.874 Test: blockdev write zeroes read no split ...passed 00:09:53.874 Test: blockdev write zeroes read split ...passed 00:09:53.874 Test: blockdev write zeroes read split partial ...passed 00:09:53.874 Test: blockdev reset ...[2024-12-10 21:36:54.509668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:53.874 [2024-12-10 21:36:54.509822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5b30 (9): Bad file descriptor 00:09:53.874 passed 00:09:53.874 Test: blockdev write read 8 blocks ...[2024-12-10 21:36:54.524219] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:53.874 passed 00:09:53.874 Test: blockdev write read size > 128k ...passed 00:09:53.874 Test: blockdev write read invalid size ...passed 00:09:53.874 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:53.874 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:53.874 Test: blockdev write read max offset ...passed 00:09:53.874 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:53.874 Test: blockdev writev readv 8 blocks ...passed 00:09:53.874 Test: blockdev writev readv 30 x 1block ...passed 00:09:53.874 Test: blockdev writev readv block ...passed 00:09:53.874 Test: blockdev writev readv size > 128k ...passed 00:09:53.874 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:53.874 Test: blockdev comparev and writev ...[2024-12-10 21:36:54.531815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.531863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.531888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.531901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.532939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:53.874 [2024-12-10 21:36:54.532950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:09:53.874 passed 00:09:53.874 Test: blockdev nvme passthru rw ...passed 00:09:53.874 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:36:54.533795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.874 [2024-12-10 21:36:54.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.533948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.874 [2024-12-10 21:36:54.533967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.534091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.874 [2024-12-10 21:36:54.534110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:53.874 [2024-12-10 21:36:54.534231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:53.874 [2024-12-10 21:36:54.534255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:53.874 passed 00:09:53.874 Test: blockdev nvme admin passthru ...passed 00:09:53.874 Test: blockdev copy ...passed 00:09:53.874 00:09:53.874 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.874 suites 1 1 n/a 0 0 00:09:53.874 tests 23 23 23 0 0 00:09:53.874 asserts 152 152 152 0 n/a 00:09:53.874 00:09:53.874 Elapsed time = 0.152 seconds 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.133 rmmod nvme_tcp 00:09:54.133 rmmod nvme_fabrics 00:09:54.133 rmmod nvme_keyring 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66943 ']' 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66943 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66943 ']' 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66943 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66943 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:54.133 killing process with pid 66943 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66943' 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66943 00:09:54.133 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66943 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:54.391 21:36:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:54.391 21:36:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:54.391 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:54.648 00:09:54.648 real 0m2.233s 00:09:54.648 user 0m5.696s 00:09:54.648 sys 0m0.708s 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 ************************************ 00:09:54.648 END TEST nvmf_bdevio 00:09:54.648 ************************************ 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:54.648 00:09:54.648 real 2m34.312s 00:09:54.648 user 6m45.245s 00:09:54.648 sys 0m52.698s 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 ************************************ 00:09:54.648 END TEST nvmf_target_core 00:09:54.648 ************************************ 00:09:54.648 21:36:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:54.648 21:36:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.648 21:36:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.648 21:36:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.648 ************************************ 00:09:54.648 START TEST nvmf_target_extra 00:09:54.648 ************************************ 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:54.648 * Looking for test storage... 
00:09:54.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.648 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:54.906 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.907 --rc genhtml_branch_coverage=1 00:09:54.907 --rc genhtml_function_coverage=1 00:09:54.907 --rc genhtml_legend=1 00:09:54.907 --rc geninfo_all_blocks=1 00:09:54.907 --rc geninfo_unexecuted_blocks=1 00:09:54.907 00:09:54.907 ' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.907 --rc genhtml_branch_coverage=1 00:09:54.907 --rc genhtml_function_coverage=1 00:09:54.907 --rc genhtml_legend=1 00:09:54.907 --rc geninfo_all_blocks=1 00:09:54.907 --rc geninfo_unexecuted_blocks=1 00:09:54.907 00:09:54.907 ' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.907 --rc genhtml_branch_coverage=1 00:09:54.907 --rc genhtml_function_coverage=1 00:09:54.907 --rc genhtml_legend=1 00:09:54.907 --rc geninfo_all_blocks=1 00:09:54.907 --rc geninfo_unexecuted_blocks=1 00:09:54.907 00:09:54.907 ' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.907 --rc genhtml_branch_coverage=1 00:09:54.907 --rc genhtml_function_coverage=1 00:09:54.907 --rc genhtml_legend=1 00:09:54.907 --rc geninfo_all_blocks=1 00:09:54.907 --rc geninfo_unexecuted_blocks=1 00:09:54.907 00:09:54.907 ' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.907 21:36:55 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:54.907 ************************************ 00:09:54.907 START TEST nvmf_auth_target 00:09:54.907 ************************************ 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:54.907 * Looking for test storage... 
00:09:54.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.907 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.166 --rc genhtml_branch_coverage=1 00:09:55.166 --rc genhtml_function_coverage=1 00:09:55.166 --rc genhtml_legend=1 00:09:55.166 --rc geninfo_all_blocks=1 00:09:55.166 --rc geninfo_unexecuted_blocks=1 00:09:55.166 00:09:55.166 ' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.166 --rc genhtml_branch_coverage=1 00:09:55.166 --rc genhtml_function_coverage=1 00:09:55.166 --rc genhtml_legend=1 00:09:55.166 --rc geninfo_all_blocks=1 00:09:55.166 --rc geninfo_unexecuted_blocks=1 00:09:55.166 00:09:55.166 ' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.166 --rc genhtml_branch_coverage=1 00:09:55.166 --rc genhtml_function_coverage=1 00:09:55.166 --rc genhtml_legend=1 00:09:55.166 --rc geninfo_all_blocks=1 00:09:55.166 --rc geninfo_unexecuted_blocks=1 00:09:55.166 00:09:55.166 ' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.166 --rc genhtml_branch_coverage=1 00:09:55.166 --rc genhtml_function_coverage=1 00:09:55.166 --rc genhtml_legend=1 00:09:55.166 --rc geninfo_all_blocks=1 00:09:55.166 --rc geninfo_unexecuted_blocks=1 00:09:55.166 00:09:55.166 ' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.166 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.167 
21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:55.167 Cannot find device "nvmf_init_br" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:55.167 Cannot find device "nvmf_init_br2" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:55.167 Cannot find device "nvmf_tgt_br" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.167 Cannot find device "nvmf_tgt_br2" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:55.167 Cannot find device "nvmf_init_br" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:55.167 Cannot find device "nvmf_init_br2" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:55.167 Cannot find device "nvmf_tgt_br" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:55.167 Cannot find device "nvmf_tgt_br2" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:55.167 Cannot find device "nvmf_br" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:55.167 Cannot find device "nvmf_init_if" 00:09:55.167 21:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:55.167 Cannot find device "nvmf_init_if2" 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:55.167 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.168 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.425 21:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:55.425 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.426 21:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:55.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:55.426 00:09:55.426 --- 10.0.0.3 ping statistics --- 00:09:55.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.426 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:55.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:55.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:09:55.426 00:09:55.426 --- 10.0.0.4 ping statistics --- 00:09:55.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.426 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:55.426 00:09:55.426 --- 10.0.0.1 ping statistics --- 00:09:55.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.426 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:55.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:09:55.426 00:09:55.426 --- 10.0.0.2 ping statistics --- 00:09:55.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.426 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.426 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67259 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67259 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67259 ']' 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.684 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67278 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f576714f432c7a3a21ab233ceddf92d2036c62e000b3716 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zdp 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f576714f432c7a3a21ab233ceddf92d2036c62e000b3716 0 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4f576714f432c7a3a21ab233ceddf92d2036c62e000b3716 0 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f576714f432c7a3a21ab233ceddf92d2036c62e000b3716 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:55.943 21:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zdp 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zdp 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.zdp 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8e6d1eb44dee2ba4157d980069c2ce3d532af7c843d4ffd61fa31acd169becea 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.trv 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8e6d1eb44dee2ba4157d980069c2ce3d532af7c843d4ffd61fa31acd169becea 3 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8e6d1eb44dee2ba4157d980069c2ce3d532af7c843d4ffd61fa31acd169becea 3 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8e6d1eb44dee2ba4157d980069c2ce3d532af7c843d4ffd61fa31acd169becea 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.trv 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.trv 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.trv 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:55.943 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:55.944 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:55.944 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:55.944 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:55.944 21:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c22f60820d39223e95517870d932a762 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ipi 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c22f60820d39223e95517870d932a762 1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c22f60820d39223e95517870d932a762 1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c22f60820d39223e95517870d932a762 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ipi 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ipi 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Ipi 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=98170da0da657fefc7bc400997ea664d58fb64642d9430d3 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lyI 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 98170da0da657fefc7bc400997ea664d58fb64642d9430d3 2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 98170da0da657fefc7bc400997ea664d58fb64642d9430d3 2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=98170da0da657fefc7bc400997ea664d58fb64642d9430d3 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lyI 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lyI 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.lyI 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=06fa39214196da20090cb6ff8684cc39b93089cc3de2325e 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Ei 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 06fa39214196da20090cb6ff8684cc39b93089cc3de2325e 2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 06fa39214196da20090cb6ff8684cc39b93089cc3de2325e 2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=06fa39214196da20090cb6ff8684cc39b93089cc3de2325e 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Ei 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Ei 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4Ei 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:56.203 21:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a53d96511978c68e9bd09a6675c5db8 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Thm 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a53d96511978c68e9bd09a6675c5db8 1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a53d96511978c68e9bd09a6675c5db8 1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a53d96511978c68e9bd09a6675c5db8 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:56.203 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Thm 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Thm 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Thm 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4fc89effe435868e1b394e4bb60337e6c8647082dfd86c682ef38ab82e1442a7 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Qjx 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
4fc89effe435868e1b394e4bb60337e6c8647082dfd86c682ef38ab82e1442a7 3 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4fc89effe435868e1b394e4bb60337e6c8647082dfd86c682ef38ab82e1442a7 3 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4fc89effe435868e1b394e4bb60337e6c8647082dfd86c682ef38ab82e1442a7 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:56.204 21:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Qjx 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Qjx 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Qjx 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67259 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67259 ']' 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.462 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67278 /var/tmp/host.sock 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67278 ']' 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
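Each gen_dhchap_key invocation above follows the same pattern: read len hex characters from /dev/urandom with xxd, wrap them into a DHHC-1 secret whose digest index matches the map declared in the trace (null=0, sha256=1, sha384=2, sha512=3), write the result to a mktemp file and chmod it to 0600. A minimal sketch of that helper is shown below; the base64 payloads seen later on the nvme connect lines decode back to these ASCII hex strings, and the trailing four bytes are assumed to be a little-endian CRC-32 of the secret per the DH-HMAC-CHAP secret representation (the python one-liner is an illustration, not SPDK's format_dhchap_key verbatim):

  # Sketch of gen_dhchap_key as traced above: the ASCII hex string itself is the
  # secret; an assumed CRC-32 suffix is appended before base64 encoding.
  gen_dhchap_key() {
      local digest=$1 len=$2 key file b64
      declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")

      b64=$(python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
      printf 'DHHC-1:%02x:%s:' "${digests[$digest]}" "$b64" > "$file"

      chmod 0600 "$file"
      echo "$file"
  }

  # Usage mirroring the trace: keys[0]=$(gen_dhchap_key null 48); ckeys[0]=$(gen_dhchap_key sha512 64)

The resulting files are what the rest of the trace consumes: each one is registered under a key name with keyring_file_add_key, once against the target RPC socket (/var/tmp/spdk.sock via rpc_cmd) and once against the host socket (/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock), and those key names are then passed to nvmf_subsystem_add_host and bdev_nvme_attach_controller as --dhchap-key keyN / --dhchap-ctrlr-key ckeyN. The kernel-initiator nvme connect calls further down use the same material inline as literal --dhchap-secret / --dhchap-ctrl-secret DHHC-1 strings.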
00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.755 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zdp 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zdp 00:09:57.014 21:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zdp 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.trv ]] 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.trv 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.trv 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.trv 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ipi 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ipi 00:09:57.582 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ipi 00:09:57.839 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.lyI ]] 00:09:57.840 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyI 00:09:57.840 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.840 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.098 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.098 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyI 00:09:58.098 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyI 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Ei 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Ei 00:09:58.357 21:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Ei 00:09:58.614 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Thm ]] 00:09:58.614 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Thm 00:09:58.614 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.614 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.615 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.615 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Thm 00:09:58.615 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Thm 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qjx 00:09:58.874 21:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Qjx 00:09:58.874 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Qjx 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.133 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.392 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:59.392 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.392 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.392 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.393 21:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.393 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.393 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.651 00:09:59.651 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.651 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.652 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.910 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.910 { 00:09:59.910 "cntlid": 1, 00:09:59.910 "qid": 0, 00:09:59.910 "state": "enabled", 00:09:59.910 "thread": "nvmf_tgt_poll_group_000", 00:09:59.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:09:59.910 "listen_address": { 00:09:59.910 "trtype": "TCP", 00:09:59.910 "adrfam": "IPv4", 00:09:59.910 "traddr": "10.0.0.3", 00:09:59.910 "trsvcid": "4420" 00:09:59.910 }, 00:09:59.910 "peer_address": { 00:09:59.910 "trtype": "TCP", 00:09:59.910 "adrfam": "IPv4", 00:09:59.910 "traddr": "10.0.0.1", 00:09:59.910 "trsvcid": "53758" 00:09:59.910 }, 00:09:59.910 "auth": { 00:09:59.910 "state": "completed", 00:09:59.910 "digest": "sha256", 00:09:59.911 "dhgroup": "null" 00:09:59.911 } 00:09:59.911 } 00:09:59.911 ]' 00:09:59.911 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.170 21:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.430 21:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:00.430 21:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:05.700 21:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.700 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.701 21:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.959 00:10:05.959 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.959 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.959 21:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.525 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.525 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.525 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.526 { 00:10:06.526 "cntlid": 3, 00:10:06.526 "qid": 0, 00:10:06.526 "state": "enabled", 00:10:06.526 "thread": "nvmf_tgt_poll_group_000", 00:10:06.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:06.526 "listen_address": { 00:10:06.526 "trtype": "TCP", 00:10:06.526 "adrfam": "IPv4", 00:10:06.526 "traddr": "10.0.0.3", 00:10:06.526 "trsvcid": "4420" 00:10:06.526 }, 00:10:06.526 "peer_address": { 00:10:06.526 "trtype": "TCP", 00:10:06.526 "adrfam": "IPv4", 00:10:06.526 "traddr": "10.0.0.1", 00:10:06.526 "trsvcid": "48824" 00:10:06.526 }, 00:10:06.526 "auth": { 00:10:06.526 "state": "completed", 00:10:06.526 "digest": "sha256", 00:10:06.526 "dhgroup": "null" 00:10:06.526 } 00:10:06.526 } 00:10:06.526 ]' 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.526 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.783 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret 
DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:06.783 21:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:07.718 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.975 21:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.541 00:10:08.541 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:08.541 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.541 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.799 { 00:10:08.799 "cntlid": 5, 00:10:08.799 "qid": 0, 00:10:08.799 "state": "enabled", 00:10:08.799 "thread": "nvmf_tgt_poll_group_000", 00:10:08.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:08.799 "listen_address": { 00:10:08.799 "trtype": "TCP", 00:10:08.799 "adrfam": "IPv4", 00:10:08.799 "traddr": "10.0.0.3", 00:10:08.799 "trsvcid": "4420" 00:10:08.799 }, 00:10:08.799 "peer_address": { 00:10:08.799 "trtype": "TCP", 00:10:08.799 "adrfam": "IPv4", 00:10:08.799 "traddr": "10.0.0.1", 00:10:08.799 "trsvcid": "48844" 00:10:08.799 }, 00:10:08.799 "auth": { 00:10:08.799 "state": "completed", 00:10:08.799 "digest": "sha256", 00:10:08.799 "dhgroup": "null" 00:10:08.799 } 00:10:08.799 } 00:10:08.799 ]' 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.799 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.366 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:09.366 21:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.933 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.193 21:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.484 00:10:10.484 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.484 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.484 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.050 { 00:10:11.050 "cntlid": 7, 00:10:11.050 "qid": 0, 00:10:11.050 "state": "enabled", 00:10:11.050 "thread": "nvmf_tgt_poll_group_000", 00:10:11.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:11.050 "listen_address": { 00:10:11.050 "trtype": "TCP", 00:10:11.050 "adrfam": "IPv4", 00:10:11.050 "traddr": "10.0.0.3", 00:10:11.050 "trsvcid": "4420" 00:10:11.050 }, 00:10:11.050 "peer_address": { 00:10:11.050 "trtype": "TCP", 00:10:11.050 "adrfam": "IPv4", 00:10:11.050 "traddr": "10.0.0.1", 00:10:11.050 "trsvcid": "48864" 00:10:11.050 }, 00:10:11.050 "auth": { 00:10:11.050 "state": "completed", 00:10:11.050 "digest": "sha256", 00:10:11.050 "dhgroup": "null" 00:10:11.050 } 00:10:11.050 } 00:10:11.050 ]' 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.050 21:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.309 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:11.309 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.241 21:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.499 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.758 00:10:12.758 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.758 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.758 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.016 { 00:10:13.016 "cntlid": 9, 00:10:13.016 "qid": 0, 00:10:13.016 "state": "enabled", 00:10:13.016 "thread": "nvmf_tgt_poll_group_000", 00:10:13.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:13.016 "listen_address": { 00:10:13.016 "trtype": "TCP", 00:10:13.016 "adrfam": "IPv4", 00:10:13.016 "traddr": "10.0.0.3", 00:10:13.016 "trsvcid": "4420" 00:10:13.016 }, 00:10:13.016 "peer_address": { 00:10:13.016 "trtype": "TCP", 00:10:13.016 "adrfam": "IPv4", 00:10:13.016 "traddr": "10.0.0.1", 00:10:13.016 "trsvcid": "48884" 00:10:13.016 }, 00:10:13.016 "auth": { 00:10:13.016 "state": "completed", 00:10:13.016 "digest": "sha256", 00:10:13.016 "dhgroup": "ffdhe2048" 00:10:13.016 } 00:10:13.016 } 00:10:13.016 ]' 00:10:13.016 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.274 21:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.532 
21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:13.532 21:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.466 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.724 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.289 00:10:15.289 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.289 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.289 21:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.547 { 00:10:15.547 "cntlid": 11, 00:10:15.547 "qid": 0, 00:10:15.547 "state": "enabled", 00:10:15.547 "thread": "nvmf_tgt_poll_group_000", 00:10:15.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:15.547 "listen_address": { 00:10:15.547 "trtype": "TCP", 00:10:15.547 "adrfam": "IPv4", 00:10:15.547 "traddr": "10.0.0.3", 00:10:15.547 "trsvcid": "4420" 00:10:15.547 }, 00:10:15.547 "peer_address": { 00:10:15.547 "trtype": "TCP", 00:10:15.547 "adrfam": "IPv4", 00:10:15.547 "traddr": "10.0.0.1", 00:10:15.547 "trsvcid": "58252" 00:10:15.547 }, 00:10:15.547 "auth": { 00:10:15.547 "state": "completed", 00:10:15.547 "digest": "sha256", 00:10:15.547 "dhgroup": "ffdhe2048" 00:10:15.547 } 00:10:15.547 } 00:10:15.547 ]' 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.547 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.805 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:15.805 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.805 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.805 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.805 
21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.063 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:16.063 21:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.996 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.253 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:17.253 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.254 21:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.512 00:10:17.512 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.512 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.512 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.769 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.769 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.769 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.769 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.027 { 00:10:18.027 "cntlid": 13, 00:10:18.027 "qid": 0, 00:10:18.027 "state": "enabled", 00:10:18.027 "thread": "nvmf_tgt_poll_group_000", 00:10:18.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:18.027 "listen_address": { 00:10:18.027 "trtype": "TCP", 00:10:18.027 "adrfam": "IPv4", 00:10:18.027 "traddr": "10.0.0.3", 00:10:18.027 "trsvcid": "4420" 00:10:18.027 }, 00:10:18.027 "peer_address": { 00:10:18.027 "trtype": "TCP", 00:10:18.027 "adrfam": "IPv4", 00:10:18.027 "traddr": "10.0.0.1", 00:10:18.027 "trsvcid": "58280" 00:10:18.027 }, 00:10:18.027 "auth": { 00:10:18.027 "state": "completed", 00:10:18.027 "digest": "sha256", 00:10:18.027 "dhgroup": "ffdhe2048" 00:10:18.027 } 00:10:18.027 } 00:10:18.027 ]' 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.027 21:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.027 21:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.284 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:18.284 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.219 21:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
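The passes captured above all drive the same fixed cycle from target/auth.sh, once per digest, DH group and key index. The sketch below condenses a single sha256/ffdhe2048/key0 iteration into plain commands, assuming the same target listener at 10.0.0.3:4420, the host-side RPC socket at /var/tmp/host.sock, keys already loaded as key0/ckey0 on both sides, and SUBNQN/HOSTNQN standing in for the literal NQN and UUID values printed in the log; it is an illustrative paraphrase of what the script runs, not additional captured output.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock                    # SPDK host application (what "hostrpc" targets above)
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c

    # Limit the host to the digest and DH group under test for this pass.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Allow the host on the subsystem with the key pair under test (the ctrlr-key
    # arguments are dropped on the passes above that configure only a host key, e.g. key3).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach through the SPDK host stack, then verify the qpair actually authenticated.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

    # Repeat the connection through the kernel initiator with the raw DHHC-1 secrets
    # shown in the log, then remove the host entry before the next combination.
    nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

Each iteration therefore exercises the handshake twice, once through the SPDK bdev_nvme host stack and once through nvme-cli, against the same subsystem keys, before the script moves on to the next DH group.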
00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.538 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.796 00:10:19.796 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.796 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.796 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.054 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.054 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.054 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.054 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.312 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.312 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.312 { 00:10:20.312 "cntlid": 15, 00:10:20.312 "qid": 0, 00:10:20.312 "state": "enabled", 00:10:20.312 "thread": "nvmf_tgt_poll_group_000", 00:10:20.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:20.312 "listen_address": { 00:10:20.312 "trtype": "TCP", 00:10:20.312 "adrfam": "IPv4", 00:10:20.312 "traddr": "10.0.0.3", 00:10:20.312 "trsvcid": "4420" 00:10:20.312 }, 00:10:20.312 "peer_address": { 00:10:20.312 "trtype": "TCP", 00:10:20.312 "adrfam": "IPv4", 00:10:20.312 "traddr": "10.0.0.1", 00:10:20.312 "trsvcid": "58312" 00:10:20.312 }, 00:10:20.312 "auth": { 00:10:20.312 "state": "completed", 00:10:20.312 "digest": "sha256", 00:10:20.312 "dhgroup": "ffdhe2048" 00:10:20.312 } 00:10:20.312 } 00:10:20.312 ]' 00:10:20.312 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.312 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.313 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.313 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:20.313 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.313 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.313 
21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.313 21:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.571 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:20.571 21:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:21.504 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.505 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:21.505 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.763 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.021 00:10:22.021 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.021 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.021 21:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.587 { 00:10:22.587 "cntlid": 17, 00:10:22.587 "qid": 0, 00:10:22.587 "state": "enabled", 00:10:22.587 "thread": "nvmf_tgt_poll_group_000", 00:10:22.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:22.587 "listen_address": { 00:10:22.587 "trtype": "TCP", 00:10:22.587 "adrfam": "IPv4", 00:10:22.587 "traddr": "10.0.0.3", 00:10:22.587 "trsvcid": "4420" 00:10:22.587 }, 00:10:22.587 "peer_address": { 00:10:22.587 "trtype": "TCP", 00:10:22.587 "adrfam": "IPv4", 00:10:22.587 "traddr": "10.0.0.1", 00:10:22.587 "trsvcid": "58342" 00:10:22.587 }, 00:10:22.587 "auth": { 00:10:22.587 "state": "completed", 00:10:22.587 "digest": "sha256", 00:10:22.587 "dhgroup": "ffdhe3072" 00:10:22.587 } 00:10:22.587 } 00:10:22.587 ]' 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.587 21:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.587 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.152 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:23.152 21:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:23.717 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.975 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.233 21:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.489 00:10:24.489 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.489 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.489 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.054 { 00:10:25.054 "cntlid": 19, 00:10:25.054 "qid": 0, 00:10:25.054 "state": "enabled", 00:10:25.054 "thread": "nvmf_tgt_poll_group_000", 00:10:25.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:25.054 "listen_address": { 00:10:25.054 "trtype": "TCP", 00:10:25.054 "adrfam": "IPv4", 00:10:25.054 "traddr": "10.0.0.3", 00:10:25.054 "trsvcid": "4420" 00:10:25.054 }, 00:10:25.054 "peer_address": { 00:10:25.054 "trtype": "TCP", 00:10:25.054 "adrfam": "IPv4", 00:10:25.054 "traddr": "10.0.0.1", 00:10:25.054 "trsvcid": "53388" 00:10:25.054 }, 00:10:25.054 "auth": { 00:10:25.054 "state": "completed", 00:10:25.054 "digest": "sha256", 00:10:25.054 "dhgroup": "ffdhe3072" 00:10:25.054 } 00:10:25.054 } 00:10:25.054 ]' 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.054 21:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.313 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:25.313 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.247 21:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.505 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.763 00:10:26.763 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.763 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.763 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.021 { 00:10:27.021 "cntlid": 21, 00:10:27.021 "qid": 0, 00:10:27.021 "state": "enabled", 00:10:27.021 "thread": "nvmf_tgt_poll_group_000", 00:10:27.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:27.021 "listen_address": { 00:10:27.021 "trtype": "TCP", 00:10:27.021 "adrfam": "IPv4", 00:10:27.021 "traddr": "10.0.0.3", 00:10:27.021 "trsvcid": "4420" 00:10:27.021 }, 00:10:27.021 "peer_address": { 00:10:27.021 "trtype": "TCP", 00:10:27.021 "adrfam": "IPv4", 00:10:27.021 "traddr": "10.0.0.1", 00:10:27.021 "trsvcid": "53426" 00:10:27.021 }, 00:10:27.021 "auth": { 00:10:27.021 "state": "completed", 00:10:27.021 "digest": "sha256", 00:10:27.021 "dhgroup": "ffdhe3072" 00:10:27.021 } 00:10:27.021 } 00:10:27.021 ]' 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.021 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.021 21:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.280 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:27.280 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.280 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.280 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.280 21:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.539 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:27.539 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:28.104 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.105 21:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.671 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.929 00:10:28.929 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.930 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.930 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.188 { 00:10:29.188 "cntlid": 23, 00:10:29.188 "qid": 0, 00:10:29.188 "state": "enabled", 00:10:29.188 "thread": "nvmf_tgt_poll_group_000", 00:10:29.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:29.188 "listen_address": { 00:10:29.188 "trtype": "TCP", 00:10:29.188 "adrfam": "IPv4", 00:10:29.188 "traddr": "10.0.0.3", 00:10:29.188 "trsvcid": "4420" 00:10:29.188 }, 00:10:29.188 "peer_address": { 00:10:29.188 "trtype": "TCP", 00:10:29.188 "adrfam": "IPv4", 00:10:29.188 "traddr": "10.0.0.1", 00:10:29.188 "trsvcid": "53460" 00:10:29.188 }, 00:10:29.188 "auth": { 00:10:29.188 "state": "completed", 00:10:29.188 "digest": "sha256", 00:10:29.188 "dhgroup": "ffdhe3072" 00:10:29.188 } 00:10:29.188 } 00:10:29.188 ]' 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:29.188 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.446 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.446 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.446 21:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.703 21:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:29.703 21:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.648 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.215 00:10:31.215 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.215 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.215 21:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.473 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.473 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.473 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.473 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.732 { 00:10:31.732 "cntlid": 25, 00:10:31.732 "qid": 0, 00:10:31.732 "state": "enabled", 00:10:31.732 "thread": "nvmf_tgt_poll_group_000", 00:10:31.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:31.732 "listen_address": { 00:10:31.732 "trtype": "TCP", 00:10:31.732 "adrfam": "IPv4", 00:10:31.732 "traddr": "10.0.0.3", 00:10:31.732 "trsvcid": "4420" 00:10:31.732 }, 00:10:31.732 "peer_address": { 00:10:31.732 "trtype": "TCP", 00:10:31.732 "adrfam": "IPv4", 00:10:31.732 "traddr": "10.0.0.1", 00:10:31.732 "trsvcid": "53486" 00:10:31.732 }, 00:10:31.732 "auth": { 00:10:31.732 "state": "completed", 00:10:31.732 "digest": "sha256", 00:10:31.732 "dhgroup": "ffdhe4096" 00:10:31.732 } 00:10:31.732 } 00:10:31.732 ]' 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.732 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.990 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:31.990 21:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.924 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.183 21:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.765 00:10:33.765 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.765 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.765 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.050 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.050 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.050 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.051 { 00:10:34.051 "cntlid": 27, 00:10:34.051 "qid": 0, 00:10:34.051 "state": "enabled", 00:10:34.051 "thread": "nvmf_tgt_poll_group_000", 00:10:34.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:34.051 "listen_address": { 00:10:34.051 "trtype": "TCP", 00:10:34.051 "adrfam": "IPv4", 00:10:34.051 "traddr": "10.0.0.3", 00:10:34.051 "trsvcid": "4420" 00:10:34.051 }, 00:10:34.051 "peer_address": { 00:10:34.051 "trtype": "TCP", 00:10:34.051 "adrfam": "IPv4", 00:10:34.051 "traddr": "10.0.0.1", 00:10:34.051 "trsvcid": "53526" 00:10:34.051 }, 00:10:34.051 "auth": { 00:10:34.051 "state": "completed", 
00:10:34.051 "digest": "sha256", 00:10:34.051 "dhgroup": "ffdhe4096" 00:10:34.051 } 00:10:34.051 } 00:10:34.051 ]' 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.051 21:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.616 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:34.616 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.181 21:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.440 21:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.440 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.698 00:10:35.957 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.957 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.957 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.215 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.215 { 00:10:36.215 "cntlid": 29, 00:10:36.215 "qid": 0, 00:10:36.215 "state": "enabled", 00:10:36.215 "thread": "nvmf_tgt_poll_group_000", 00:10:36.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:36.215 "listen_address": { 00:10:36.215 "trtype": "TCP", 00:10:36.215 "adrfam": "IPv4", 00:10:36.215 "traddr": "10.0.0.3", 00:10:36.215 "trsvcid": "4420" 00:10:36.215 }, 00:10:36.215 "peer_address": { 00:10:36.215 "trtype": "TCP", 00:10:36.215 "adrfam": 
"IPv4", 00:10:36.215 "traddr": "10.0.0.1", 00:10:36.215 "trsvcid": "43218" 00:10:36.215 }, 00:10:36.215 "auth": { 00:10:36.215 "state": "completed", 00:10:36.215 "digest": "sha256", 00:10:36.215 "dhgroup": "ffdhe4096" 00:10:36.215 } 00:10:36.215 } 00:10:36.215 ]' 00:10:36.216 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.216 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.216 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.216 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.216 21:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.474 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.474 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.474 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.731 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:36.731 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:37.298 21:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.298 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:37.578 21:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.578 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:38.143 00:10:38.143 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.143 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.144 21:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.403 { 00:10:38.403 "cntlid": 31, 00:10:38.403 "qid": 0, 00:10:38.403 "state": "enabled", 00:10:38.403 "thread": "nvmf_tgt_poll_group_000", 00:10:38.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:38.403 "listen_address": { 00:10:38.403 "trtype": "TCP", 00:10:38.403 "adrfam": "IPv4", 00:10:38.403 "traddr": "10.0.0.3", 00:10:38.403 "trsvcid": "4420" 00:10:38.403 }, 00:10:38.403 "peer_address": { 00:10:38.403 "trtype": "TCP", 
00:10:38.403 "adrfam": "IPv4", 00:10:38.403 "traddr": "10.0.0.1", 00:10:38.403 "trsvcid": "43252" 00:10:38.403 }, 00:10:38.403 "auth": { 00:10:38.403 "state": "completed", 00:10:38.403 "digest": "sha256", 00:10:38.403 "dhgroup": "ffdhe4096" 00:10:38.403 } 00:10:38.403 } 00:10:38.403 ]' 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.403 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.661 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:38.661 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.661 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.661 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.661 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.919 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:38.920 21:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.853 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:40.111 
21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.111 21:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.677 00:10:40.677 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.677 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.677 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.936 { 00:10:40.936 "cntlid": 33, 00:10:40.936 "qid": 0, 00:10:40.936 "state": "enabled", 00:10:40.936 "thread": "nvmf_tgt_poll_group_000", 00:10:40.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:40.936 "listen_address": { 00:10:40.936 "trtype": "TCP", 00:10:40.936 "adrfam": "IPv4", 00:10:40.936 "traddr": 
"10.0.0.3", 00:10:40.936 "trsvcid": "4420" 00:10:40.936 }, 00:10:40.936 "peer_address": { 00:10:40.936 "trtype": "TCP", 00:10:40.936 "adrfam": "IPv4", 00:10:40.936 "traddr": "10.0.0.1", 00:10:40.936 "trsvcid": "43284" 00:10:40.936 }, 00:10:40.936 "auth": { 00:10:40.936 "state": "completed", 00:10:40.936 "digest": "sha256", 00:10:40.936 "dhgroup": "ffdhe6144" 00:10:40.936 } 00:10:40.936 } 00:10:40.936 ]' 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.936 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.194 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:41.194 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.194 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.194 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.194 21:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.453 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:41.453 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.388 21:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.646 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.904 00:10:43.162 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.162 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.162 21:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.420 { 00:10:43.420 "cntlid": 35, 00:10:43.420 "qid": 0, 00:10:43.420 "state": "enabled", 00:10:43.420 "thread": "nvmf_tgt_poll_group_000", 
00:10:43.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:43.420 "listen_address": { 00:10:43.420 "trtype": "TCP", 00:10:43.420 "adrfam": "IPv4", 00:10:43.420 "traddr": "10.0.0.3", 00:10:43.420 "trsvcid": "4420" 00:10:43.420 }, 00:10:43.420 "peer_address": { 00:10:43.420 "trtype": "TCP", 00:10:43.420 "adrfam": "IPv4", 00:10:43.420 "traddr": "10.0.0.1", 00:10:43.420 "trsvcid": "43296" 00:10:43.420 }, 00:10:43.420 "auth": { 00:10:43.420 "state": "completed", 00:10:43.420 "digest": "sha256", 00:10:43.420 "dhgroup": "ffdhe6144" 00:10:43.420 } 00:10:43.420 } 00:10:43.420 ]' 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:43.420 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.678 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.678 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.678 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.936 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:43.936 21:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.867 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.867 21:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.125 21:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.383 00:10:45.642 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.642 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.642 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.900 { 
00:10:45.900 "cntlid": 37, 00:10:45.900 "qid": 0, 00:10:45.900 "state": "enabled", 00:10:45.900 "thread": "nvmf_tgt_poll_group_000", 00:10:45.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:45.900 "listen_address": { 00:10:45.900 "trtype": "TCP", 00:10:45.900 "adrfam": "IPv4", 00:10:45.900 "traddr": "10.0.0.3", 00:10:45.900 "trsvcid": "4420" 00:10:45.900 }, 00:10:45.900 "peer_address": { 00:10:45.900 "trtype": "TCP", 00:10:45.900 "adrfam": "IPv4", 00:10:45.900 "traddr": "10.0.0.1", 00:10:45.900 "trsvcid": "55388" 00:10:45.900 }, 00:10:45.900 "auth": { 00:10:45.900 "state": "completed", 00:10:45.900 "digest": "sha256", 00:10:45.900 "dhgroup": "ffdhe6144" 00:10:45.900 } 00:10:45.900 } 00:10:45.900 ]' 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:45.900 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.158 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.158 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.158 21:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.416 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:46.416 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.350 21:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.350 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.351 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.916 00:10:48.174 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.174 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.174 21:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:48.432 { 00:10:48.432 "cntlid": 39, 00:10:48.432 "qid": 0, 00:10:48.432 "state": "enabled", 00:10:48.432 "thread": "nvmf_tgt_poll_group_000", 00:10:48.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:48.432 "listen_address": { 00:10:48.432 "trtype": "TCP", 00:10:48.432 "adrfam": "IPv4", 00:10:48.432 "traddr": "10.0.0.3", 00:10:48.432 "trsvcid": "4420" 00:10:48.432 }, 00:10:48.432 "peer_address": { 00:10:48.432 "trtype": "TCP", 00:10:48.432 "adrfam": "IPv4", 00:10:48.432 "traddr": "10.0.0.1", 00:10:48.432 "trsvcid": "55416" 00:10:48.432 }, 00:10:48.432 "auth": { 00:10:48.432 "state": "completed", 00:10:48.432 "digest": "sha256", 00:10:48.432 "dhgroup": "ffdhe6144" 00:10:48.432 } 00:10:48.432 } 00:10:48.432 ]' 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.432 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.690 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.690 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.690 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.948 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:48.948 21:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.882 21:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.816 00:10:50.816 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.816 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.816 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.074 { 00:10:51.074 "cntlid": 41, 00:10:51.074 "qid": 0, 00:10:51.074 "state": "enabled", 00:10:51.074 "thread": "nvmf_tgt_poll_group_000", 00:10:51.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:51.074 "listen_address": { 00:10:51.074 "trtype": "TCP", 00:10:51.074 "adrfam": "IPv4", 00:10:51.074 "traddr": "10.0.0.3", 00:10:51.074 "trsvcid": "4420" 00:10:51.074 }, 00:10:51.074 "peer_address": { 00:10:51.074 "trtype": "TCP", 00:10:51.074 "adrfam": "IPv4", 00:10:51.074 "traddr": "10.0.0.1", 00:10:51.074 "trsvcid": "55446" 00:10:51.074 }, 00:10:51.074 "auth": { 00:10:51.074 "state": "completed", 00:10:51.074 "digest": "sha256", 00:10:51.074 "dhgroup": "ffdhe8192" 00:10:51.074 } 00:10:51.074 } 00:10:51.074 ]' 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.074 21:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.642 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:51.642 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
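(Editor's sketch, not part of the captured output.) The jq checks in the lines above read the negotiated authentication parameters of the active qpair back from the target and compare them with what this round configured. A minimal stand-alone version of that verification step, assuming the target application's rpc.py answers on its default socket and that the qpair was authenticated with sha256/ffdhe8192 as in this run:

  # Query the subsystem's qpairs and confirm DH-HMAC-CHAP completed with the expected parameters.
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # negotiated hash function
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished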
00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.207 21:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.465 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.030 00:10:53.289 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.289 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.289 21:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.547 21:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.547 { 00:10:53.547 "cntlid": 43, 00:10:53.547 "qid": 0, 00:10:53.547 "state": "enabled", 00:10:53.547 "thread": "nvmf_tgt_poll_group_000", 00:10:53.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:53.547 "listen_address": { 00:10:53.547 "trtype": "TCP", 00:10:53.547 "adrfam": "IPv4", 00:10:53.547 "traddr": "10.0.0.3", 00:10:53.547 "trsvcid": "4420" 00:10:53.547 }, 00:10:53.547 "peer_address": { 00:10:53.547 "trtype": "TCP", 00:10:53.547 "adrfam": "IPv4", 00:10:53.547 "traddr": "10.0.0.1", 00:10:53.547 "trsvcid": "55460" 00:10:53.547 }, 00:10:53.547 "auth": { 00:10:53.547 "state": "completed", 00:10:53.547 "digest": "sha256", 00:10:53.547 "dhgroup": "ffdhe8192" 00:10:53.547 } 00:10:53.547 } 00:10:53.547 ]' 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.547 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.114 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:54.114 21:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.679 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.973 21:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.907 00:10:55.907 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.907 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.907 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.165 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.165 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.166 21:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.166 { 00:10:56.166 "cntlid": 45, 00:10:56.166 "qid": 0, 00:10:56.166 "state": "enabled", 00:10:56.166 "thread": "nvmf_tgt_poll_group_000", 00:10:56.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:56.166 "listen_address": { 00:10:56.166 "trtype": "TCP", 00:10:56.166 "adrfam": "IPv4", 00:10:56.166 "traddr": "10.0.0.3", 00:10:56.166 "trsvcid": "4420" 00:10:56.166 }, 00:10:56.166 "peer_address": { 00:10:56.166 "trtype": "TCP", 00:10:56.166 "adrfam": "IPv4", 00:10:56.166 "traddr": "10.0.0.1", 00:10:56.166 "trsvcid": "45080" 00:10:56.166 }, 00:10:56.166 "auth": { 00:10:56.166 "state": "completed", 00:10:56.166 "digest": "sha256", 00:10:56.166 "dhgroup": "ffdhe8192" 00:10:56.166 } 00:10:56.166 } 00:10:56.166 ]' 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.166 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.423 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.423 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.423 21:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.681 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:56.681 21:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.615 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.873 21:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.453 00:10:58.453 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.453 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.453 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.018 
21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.018 { 00:10:59.018 "cntlid": 47, 00:10:59.018 "qid": 0, 00:10:59.018 "state": "enabled", 00:10:59.018 "thread": "nvmf_tgt_poll_group_000", 00:10:59.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:10:59.018 "listen_address": { 00:10:59.018 "trtype": "TCP", 00:10:59.018 "adrfam": "IPv4", 00:10:59.018 "traddr": "10.0.0.3", 00:10:59.018 "trsvcid": "4420" 00:10:59.018 }, 00:10:59.018 "peer_address": { 00:10:59.018 "trtype": "TCP", 00:10:59.018 "adrfam": "IPv4", 00:10:59.018 "traddr": "10.0.0.1", 00:10:59.018 "trsvcid": "45098" 00:10:59.018 }, 00:10:59.018 "auth": { 00:10:59.018 "state": "completed", 00:10:59.018 "digest": "sha256", 00:10:59.018 "dhgroup": "ffdhe8192" 00:10:59.018 } 00:10:59.018 } 00:10:59.018 ]' 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.018 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.276 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:10:59.276 21:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
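Each keyid iteration provisions both sides before the connects above are attempted: the host bdev layer is restricted to a single digest/dhgroup pair, the host NQN is re-added to the subsystem with the key pair under test, and a controller is attached with the same keys. A minimal sketch using only the commands visible in this pass (key1/ckey1 are key names already loaded by the test; rpc_cmd is assumed to address the nvmf target app):

# Host side: only negotiate sha256 + ffdhe8192 for this iteration.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# Target side: authorize the host NQN with the DH-HMAC-CHAP key pair under test.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host side: attach a controller that must authenticate with the same keys.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

Note that the key3 iterations in this log pass no --dhchap-ctrlr-key, matching the ${ckeys[$3]:+...} expansion: a controller key is only supplied when one is defined for that index.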
00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.210 21:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.467 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.726 00:11:00.983 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.983 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.984 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.241 { 00:11:01.241 "cntlid": 49, 00:11:01.241 "qid": 0, 00:11:01.241 "state": "enabled", 00:11:01.241 "thread": "nvmf_tgt_poll_group_000", 00:11:01.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:01.241 "listen_address": { 00:11:01.241 "trtype": "TCP", 00:11:01.241 "adrfam": "IPv4", 00:11:01.241 "traddr": "10.0.0.3", 00:11:01.241 "trsvcid": "4420" 00:11:01.241 }, 00:11:01.241 "peer_address": { 00:11:01.241 "trtype": "TCP", 00:11:01.241 "adrfam": "IPv4", 00:11:01.241 "traddr": "10.0.0.1", 00:11:01.241 "trsvcid": "45130" 00:11:01.241 }, 00:11:01.241 "auth": { 00:11:01.241 "state": "completed", 00:11:01.241 "digest": "sha384", 00:11:01.241 "dhgroup": "null" 00:11:01.241 } 00:11:01.241 } 00:11:01.241 ]' 00:11:01.241 21:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.498 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.755 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:01.755 21:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.689 21:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.689 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.946 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.204 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.204 21:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.462 00:11:03.462 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.462 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.462 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.026 { 00:11:04.026 "cntlid": 51, 00:11:04.026 "qid": 0, 00:11:04.026 "state": "enabled", 00:11:04.026 "thread": "nvmf_tgt_poll_group_000", 00:11:04.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:04.026 "listen_address": { 00:11:04.026 "trtype": "TCP", 00:11:04.026 "adrfam": "IPv4", 00:11:04.026 "traddr": "10.0.0.3", 00:11:04.026 "trsvcid": "4420" 00:11:04.026 }, 00:11:04.026 "peer_address": { 00:11:04.026 "trtype": "TCP", 00:11:04.026 "adrfam": "IPv4", 00:11:04.026 "traddr": "10.0.0.1", 00:11:04.026 "trsvcid": "45166" 00:11:04.026 }, 00:11:04.026 "auth": { 00:11:04.026 "state": "completed", 00:11:04.026 "digest": "sha384", 00:11:04.026 "dhgroup": "null" 00:11:04.026 } 00:11:04.026 } 00:11:04.026 ]' 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.026 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.283 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:04.283 21:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.214 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.214 21:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.472 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.039 00:11:06.039 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.039 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:11:06.039 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.298 { 00:11:06.298 "cntlid": 53, 00:11:06.298 "qid": 0, 00:11:06.298 "state": "enabled", 00:11:06.298 "thread": "nvmf_tgt_poll_group_000", 00:11:06.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:06.298 "listen_address": { 00:11:06.298 "trtype": "TCP", 00:11:06.298 "adrfam": "IPv4", 00:11:06.298 "traddr": "10.0.0.3", 00:11:06.298 "trsvcid": "4420" 00:11:06.298 }, 00:11:06.298 "peer_address": { 00:11:06.298 "trtype": "TCP", 00:11:06.298 "adrfam": "IPv4", 00:11:06.298 "traddr": "10.0.0.1", 00:11:06.298 "trsvcid": "39850" 00:11:06.298 }, 00:11:06.298 "auth": { 00:11:06.298 "state": "completed", 00:11:06.298 "digest": "sha384", 00:11:06.298 "dhgroup": "null" 00:11:06.298 } 00:11:06.298 } 00:11:06.298 ]' 00:11:06.298 21:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.298 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.299 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.557 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:06.557 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.557 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.557 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.557 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.814 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:06.815 21:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.749 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.314 00:11:08.314 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.314 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.314 21:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.572 { 00:11:08.572 "cntlid": 55, 00:11:08.572 "qid": 0, 00:11:08.572 "state": "enabled", 00:11:08.572 "thread": "nvmf_tgt_poll_group_000", 00:11:08.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:08.572 "listen_address": { 00:11:08.572 "trtype": "TCP", 00:11:08.572 "adrfam": "IPv4", 00:11:08.572 "traddr": "10.0.0.3", 00:11:08.572 "trsvcid": "4420" 00:11:08.572 }, 00:11:08.572 "peer_address": { 00:11:08.572 "trtype": "TCP", 00:11:08.572 "adrfam": "IPv4", 00:11:08.572 "traddr": "10.0.0.1", 00:11:08.572 "trsvcid": "39888" 00:11:08.572 }, 00:11:08.572 "auth": { 00:11:08.572 "state": "completed", 00:11:08.572 "digest": "sha384", 00:11:08.572 "dhgroup": "null" 00:11:08.572 } 00:11:08.572 } 00:11:08.572 ]' 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.572 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.830 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:08.830 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.830 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.830 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.830 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.088 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:09.088 21:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.021 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.278 21:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.535 00:11:10.793 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
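The same keys are also exercised through the kernel initiator: nvme-cli connects in-band with the DHHC-1 secrets, the connection is torn down, and the host entry is removed before the next digest/dhgroup/key combination. A minimal sketch using the flags seen above, where $HOST_NQN and the two secret variables stand in for the literal uuid host NQN and DHHC-1 blobs recorded in the log:

# Kernel nvme-cli leg (secret values abbreviated; see the literal DHHC-1 strings above).
HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOST_NQN" --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Target side: drop the host entry again before the next iteration.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"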
00:11:10.793 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.793 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.051 { 00:11:11.051 "cntlid": 57, 00:11:11.051 "qid": 0, 00:11:11.051 "state": "enabled", 00:11:11.051 "thread": "nvmf_tgt_poll_group_000", 00:11:11.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:11.051 "listen_address": { 00:11:11.051 "trtype": "TCP", 00:11:11.051 "adrfam": "IPv4", 00:11:11.051 "traddr": "10.0.0.3", 00:11:11.051 "trsvcid": "4420" 00:11:11.051 }, 00:11:11.051 "peer_address": { 00:11:11.051 "trtype": "TCP", 00:11:11.051 "adrfam": "IPv4", 00:11:11.051 "traddr": "10.0.0.1", 00:11:11.051 "trsvcid": "39910" 00:11:11.051 }, 00:11:11.051 "auth": { 00:11:11.051 "state": "completed", 00:11:11.051 "digest": "sha384", 00:11:11.051 "dhgroup": "ffdhe2048" 00:11:11.051 } 00:11:11.051 } 00:11:11.051 ]' 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.051 21:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.615 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:11.615 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: 
--dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.180 21:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.438 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:12.438 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.438 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.439 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.740 00:11:12.997 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.997 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.997 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.257 { 00:11:13.257 "cntlid": 59, 00:11:13.257 "qid": 0, 00:11:13.257 "state": "enabled", 00:11:13.257 "thread": "nvmf_tgt_poll_group_000", 00:11:13.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:13.257 "listen_address": { 00:11:13.257 "trtype": "TCP", 00:11:13.257 "adrfam": "IPv4", 00:11:13.257 "traddr": "10.0.0.3", 00:11:13.257 "trsvcid": "4420" 00:11:13.257 }, 00:11:13.257 "peer_address": { 00:11:13.257 "trtype": "TCP", 00:11:13.257 "adrfam": "IPv4", 00:11:13.257 "traddr": "10.0.0.1", 00:11:13.257 "trsvcid": "39932" 00:11:13.257 }, 00:11:13.257 "auth": { 00:11:13.257 "state": "completed", 00:11:13.257 "digest": "sha384", 00:11:13.257 "dhgroup": "ffdhe2048" 00:11:13.257 } 00:11:13.257 } 00:11:13.257 ]' 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.257 21:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.257 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.257 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.257 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.822 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:13.822 21:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.755 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.013 21:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.578 00:11:15.578 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.578 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.578 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.835 { 00:11:15.835 "cntlid": 61, 00:11:15.835 "qid": 0, 00:11:15.835 "state": "enabled", 00:11:15.835 "thread": "nvmf_tgt_poll_group_000", 00:11:15.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:15.835 "listen_address": { 00:11:15.835 "trtype": "TCP", 00:11:15.835 "adrfam": "IPv4", 00:11:15.835 "traddr": "10.0.0.3", 00:11:15.835 "trsvcid": "4420" 00:11:15.835 }, 00:11:15.835 "peer_address": { 00:11:15.835 "trtype": "TCP", 00:11:15.835 "adrfam": "IPv4", 00:11:15.835 "traddr": "10.0.0.1", 00:11:15.835 "trsvcid": "41910" 00:11:15.835 }, 00:11:15.835 "auth": { 00:11:15.835 "state": "completed", 00:11:15.835 "digest": "sha384", 00:11:15.835 "dhgroup": "ffdhe2048" 00:11:15.835 } 00:11:15.835 } 00:11:15.835 ]' 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.835 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.836 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.094 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.094 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.094 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.353 21:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:16.353 21:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:17.322 21:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.887 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.145 00:11:18.402 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.402 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.402 21:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.660 { 00:11:18.660 "cntlid": 63, 00:11:18.660 "qid": 0, 00:11:18.660 "state": "enabled", 00:11:18.660 "thread": "nvmf_tgt_poll_group_000", 00:11:18.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:18.660 "listen_address": { 00:11:18.660 "trtype": "TCP", 00:11:18.660 "adrfam": "IPv4", 00:11:18.660 "traddr": "10.0.0.3", 00:11:18.660 "trsvcid": "4420" 00:11:18.660 }, 00:11:18.660 "peer_address": { 00:11:18.660 "trtype": "TCP", 00:11:18.660 "adrfam": "IPv4", 00:11:18.660 "traddr": "10.0.0.1", 00:11:18.660 "trsvcid": "41930" 00:11:18.660 }, 00:11:18.660 "auth": { 00:11:18.660 "state": "completed", 00:11:18.660 "digest": "sha384", 00:11:18.660 "dhgroup": "ffdhe2048" 00:11:18.660 } 00:11:18.660 } 00:11:18.660 ]' 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.660 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.918 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.918 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.918 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.918 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.918 21:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.483 21:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:19.483 21:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.419 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:21.053 21:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.311 00:11:21.311 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.311 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.311 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.952 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.952 { 00:11:21.952 "cntlid": 65, 00:11:21.952 "qid": 0, 00:11:21.952 "state": "enabled", 00:11:21.952 "thread": "nvmf_tgt_poll_group_000", 00:11:21.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:21.952 "listen_address": { 00:11:21.952 "trtype": "TCP", 00:11:21.952 "adrfam": "IPv4", 00:11:21.952 "traddr": "10.0.0.3", 00:11:21.952 "trsvcid": "4420" 00:11:21.952 }, 00:11:21.952 "peer_address": { 00:11:21.952 "trtype": "TCP", 00:11:21.952 "adrfam": "IPv4", 00:11:21.953 "traddr": "10.0.0.1", 00:11:21.953 "trsvcid": "41954" 00:11:21.953 }, 00:11:21.953 "auth": { 00:11:21.953 "state": "completed", 00:11:21.953 "digest": "sha384", 00:11:21.953 "dhgroup": "ffdhe3072" 00:11:21.953 } 00:11:21.953 } 00:11:21.953 ]' 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.953 21:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.518 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:22.518 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:23.084 21:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.650 21:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.650 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.910 00:11:23.910 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.910 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.910 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.168 { 00:11:24.168 "cntlid": 67, 00:11:24.168 "qid": 0, 00:11:24.168 "state": "enabled", 00:11:24.168 "thread": "nvmf_tgt_poll_group_000", 00:11:24.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:24.168 "listen_address": { 00:11:24.168 "trtype": "TCP", 00:11:24.168 "adrfam": "IPv4", 00:11:24.168 "traddr": "10.0.0.3", 00:11:24.168 "trsvcid": "4420" 00:11:24.168 }, 00:11:24.168 "peer_address": { 00:11:24.168 "trtype": "TCP", 00:11:24.168 "adrfam": "IPv4", 00:11:24.168 "traddr": "10.0.0.1", 00:11:24.168 "trsvcid": "41972" 00:11:24.168 }, 00:11:24.168 "auth": { 00:11:24.168 "state": "completed", 00:11:24.168 "digest": "sha384", 00:11:24.168 "dhgroup": "ffdhe3072" 00:11:24.168 } 00:11:24.168 } 00:11:24.168 ]' 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.168 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.169 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.427 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.427 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.427 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.427 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.427 21:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.685 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:24.685 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:25.251 21:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.251 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.817 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.074 00:11:26.074 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.074 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.074 21:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.638 { 00:11:26.638 "cntlid": 69, 00:11:26.638 "qid": 0, 00:11:26.638 "state": "enabled", 00:11:26.638 "thread": "nvmf_tgt_poll_group_000", 00:11:26.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:26.638 "listen_address": { 00:11:26.638 "trtype": "TCP", 00:11:26.638 "adrfam": "IPv4", 00:11:26.638 "traddr": "10.0.0.3", 00:11:26.638 "trsvcid": "4420" 00:11:26.638 }, 00:11:26.638 "peer_address": { 00:11:26.638 "trtype": "TCP", 00:11:26.638 "adrfam": "IPv4", 00:11:26.638 "traddr": "10.0.0.1", 00:11:26.638 "trsvcid": "43886" 00:11:26.638 }, 00:11:26.638 "auth": { 00:11:26.638 "state": "completed", 00:11:26.638 "digest": "sha384", 00:11:26.638 "dhgroup": "ffdhe3072" 00:11:26.638 } 00:11:26.638 } 00:11:26.638 ]' 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:26.638 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.209 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:27.209 21:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:28.142 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.401 21:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.658 00:11:28.658 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.658 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.658 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.222 { 00:11:29.222 "cntlid": 71, 00:11:29.222 "qid": 0, 00:11:29.222 "state": "enabled", 00:11:29.222 "thread": "nvmf_tgt_poll_group_000", 00:11:29.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:29.222 "listen_address": { 00:11:29.222 "trtype": "TCP", 00:11:29.222 "adrfam": "IPv4", 00:11:29.222 "traddr": "10.0.0.3", 00:11:29.222 "trsvcid": "4420" 00:11:29.222 }, 00:11:29.222 "peer_address": { 00:11:29.222 "trtype": "TCP", 00:11:29.222 "adrfam": "IPv4", 00:11:29.222 "traddr": "10.0.0.1", 00:11:29.222 "trsvcid": "43920" 00:11:29.222 }, 00:11:29.222 "auth": { 00:11:29.222 "state": "completed", 00:11:29.222 "digest": "sha384", 00:11:29.222 "dhgroup": "ffdhe3072" 00:11:29.222 } 00:11:29.222 } 00:11:29.222 ]' 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.222 21:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.787 21:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:29.787 21:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.717 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.975 21:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.975 21:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.538 00:11:31.538 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.538 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.538 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.796 { 00:11:31.796 "cntlid": 73, 00:11:31.796 "qid": 0, 00:11:31.796 "state": "enabled", 00:11:31.796 "thread": "nvmf_tgt_poll_group_000", 00:11:31.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:31.796 "listen_address": { 00:11:31.796 "trtype": "TCP", 00:11:31.796 "adrfam": "IPv4", 00:11:31.796 "traddr": "10.0.0.3", 00:11:31.796 "trsvcid": "4420" 00:11:31.796 }, 00:11:31.796 "peer_address": { 00:11:31.796 "trtype": "TCP", 00:11:31.796 "adrfam": "IPv4", 00:11:31.796 "traddr": "10.0.0.1", 00:11:31.796 "trsvcid": "43938" 00:11:31.796 }, 00:11:31.796 "auth": { 00:11:31.796 "state": "completed", 00:11:31.796 "digest": "sha384", 00:11:31.796 "dhgroup": "ffdhe4096" 00:11:31.796 } 00:11:31.796 } 00:11:31.796 ]' 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.796 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.426 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:32.426 21:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.991 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.249 21:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.249 21:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.249 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.249 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.249 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.249 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.814 00:11:33.814 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.814 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.814 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.072 { 00:11:34.072 "cntlid": 75, 00:11:34.072 "qid": 0, 00:11:34.072 "state": "enabled", 00:11:34.072 "thread": "nvmf_tgt_poll_group_000", 00:11:34.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:34.072 "listen_address": { 00:11:34.072 "trtype": "TCP", 00:11:34.072 "adrfam": "IPv4", 00:11:34.072 "traddr": "10.0.0.3", 00:11:34.072 "trsvcid": "4420" 00:11:34.072 }, 00:11:34.072 "peer_address": { 00:11:34.072 "trtype": "TCP", 00:11:34.072 "adrfam": "IPv4", 00:11:34.072 "traddr": "10.0.0.1", 00:11:34.072 "trsvcid": "43956" 00:11:34.072 }, 00:11:34.072 "auth": { 00:11:34.072 "state": "completed", 00:11:34.072 "digest": "sha384", 00:11:34.072 "dhgroup": "ffdhe4096" 00:11:34.072 } 00:11:34.072 } 00:11:34.072 ]' 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:34.072 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.330 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.330 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.330 21:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.588 21:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:34.588 21:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.519 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.777 21:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.342 00:11:36.342 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.342 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.342 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.908 { 00:11:36.908 "cntlid": 77, 00:11:36.908 "qid": 0, 00:11:36.908 "state": "enabled", 00:11:36.908 "thread": "nvmf_tgt_poll_group_000", 00:11:36.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:36.908 "listen_address": { 00:11:36.908 "trtype": "TCP", 00:11:36.908 "adrfam": "IPv4", 00:11:36.908 "traddr": "10.0.0.3", 00:11:36.908 "trsvcid": "4420" 00:11:36.908 }, 00:11:36.908 "peer_address": { 00:11:36.908 "trtype": "TCP", 00:11:36.908 "adrfam": "IPv4", 00:11:36.908 "traddr": "10.0.0.1", 00:11:36.908 "trsvcid": "57912" 00:11:36.908 }, 00:11:36.908 "auth": { 00:11:36.908 "state": "completed", 00:11:36.908 "digest": "sha384", 00:11:36.908 "dhgroup": "ffdhe4096" 00:11:36.908 } 00:11:36.908 } 00:11:36.908 ]' 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.908 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.166 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:37.166 21:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:38.137 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:38.138 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:38.396 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.397 21:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.397 21:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.655 00:11:38.655 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.655 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.655 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.222 { 00:11:39.222 "cntlid": 79, 00:11:39.222 "qid": 0, 00:11:39.222 "state": "enabled", 00:11:39.222 "thread": "nvmf_tgt_poll_group_000", 00:11:39.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:39.222 "listen_address": { 00:11:39.222 "trtype": "TCP", 00:11:39.222 "adrfam": "IPv4", 00:11:39.222 "traddr": "10.0.0.3", 00:11:39.222 "trsvcid": "4420" 00:11:39.222 }, 00:11:39.222 "peer_address": { 00:11:39.222 "trtype": "TCP", 00:11:39.222 "adrfam": "IPv4", 00:11:39.222 "traddr": "10.0.0.1", 00:11:39.222 "trsvcid": "57944" 00:11:39.222 }, 00:11:39.222 "auth": { 00:11:39.222 "state": "completed", 00:11:39.222 "digest": "sha384", 00:11:39.222 "dhgroup": "ffdhe4096" 00:11:39.222 } 00:11:39.222 } 00:11:39.222 ]' 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.222 21:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.222 21:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.788 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:39.788 21:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:40.354 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.354 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:40.354 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.354 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.355 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.355 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.355 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.355 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.355 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.612 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.870 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.870 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.870 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.870 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.488 00:11:41.488 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.488 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.488 21:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.746 { 00:11:41.746 "cntlid": 81, 00:11:41.746 "qid": 0, 00:11:41.746 "state": "enabled", 00:11:41.746 "thread": "nvmf_tgt_poll_group_000", 00:11:41.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:41.746 "listen_address": { 00:11:41.746 "trtype": "TCP", 00:11:41.746 "adrfam": "IPv4", 00:11:41.746 "traddr": "10.0.0.3", 00:11:41.746 "trsvcid": "4420" 00:11:41.746 }, 00:11:41.746 "peer_address": { 00:11:41.746 "trtype": "TCP", 00:11:41.746 "adrfam": "IPv4", 00:11:41.746 "traddr": "10.0.0.1", 00:11:41.746 "trsvcid": "57964" 00:11:41.746 }, 00:11:41.746 "auth": { 00:11:41.746 "state": "completed", 00:11:41.746 "digest": "sha384", 00:11:41.746 "dhgroup": "ffdhe6144" 00:11:41.746 } 00:11:41.746 } 00:11:41.746 ]' 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.746 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.004 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:42.004 21:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:42.937 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.196 21:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.761 00:11:43.761 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.761 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.761 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.019 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.019 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.019 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.019 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.278 { 00:11:44.278 "cntlid": 83, 00:11:44.278 "qid": 0, 00:11:44.278 "state": "enabled", 00:11:44.278 "thread": "nvmf_tgt_poll_group_000", 00:11:44.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:44.278 "listen_address": { 00:11:44.278 "trtype": "TCP", 00:11:44.278 "adrfam": "IPv4", 00:11:44.278 "traddr": "10.0.0.3", 00:11:44.278 "trsvcid": "4420" 00:11:44.278 }, 00:11:44.278 "peer_address": { 00:11:44.278 "trtype": "TCP", 00:11:44.278 "adrfam": "IPv4", 00:11:44.278 "traddr": "10.0.0.1", 00:11:44.278 "trsvcid": "57984" 00:11:44.278 }, 00:11:44.278 "auth": { 00:11:44.278 "state": "completed", 00:11:44.278 "digest": "sha384", 
00:11:44.278 "dhgroup": "ffdhe6144" 00:11:44.278 } 00:11:44.278 } 00:11:44.278 ]' 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.278 21:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.535 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:44.535 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:45.469 21:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:45.727 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:45.727 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.728 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.294 00:11:46.294 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.294 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.294 21:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.552 { 00:11:46.552 "cntlid": 85, 00:11:46.552 "qid": 0, 00:11:46.552 "state": "enabled", 00:11:46.552 "thread": "nvmf_tgt_poll_group_000", 00:11:46.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:46.552 "listen_address": { 00:11:46.552 "trtype": "TCP", 00:11:46.552 "adrfam": "IPv4", 00:11:46.552 "traddr": "10.0.0.3", 00:11:46.552 "trsvcid": "4420" 00:11:46.552 }, 00:11:46.552 "peer_address": { 00:11:46.552 "trtype": "TCP", 00:11:46.552 "adrfam": "IPv4", 00:11:46.552 "traddr": "10.0.0.1", 00:11:46.552 "trsvcid": "40734" 
00:11:46.552 }, 00:11:46.552 "auth": { 00:11:46.552 "state": "completed", 00:11:46.552 "digest": "sha384", 00:11:46.552 "dhgroup": "ffdhe6144" 00:11:46.552 } 00:11:46.552 } 00:11:46.552 ]' 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.552 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.119 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:47.119 21:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:47.686 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.945 21:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.513 00:11:48.513 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.513 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.513 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.080 { 00:11:49.080 "cntlid": 87, 00:11:49.080 "qid": 0, 00:11:49.080 "state": "enabled", 00:11:49.080 "thread": "nvmf_tgt_poll_group_000", 00:11:49.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:49.080 "listen_address": { 00:11:49.080 "trtype": "TCP", 00:11:49.080 "adrfam": "IPv4", 00:11:49.080 "traddr": "10.0.0.3", 00:11:49.080 "trsvcid": "4420" 00:11:49.080 }, 00:11:49.080 "peer_address": { 00:11:49.080 "trtype": "TCP", 00:11:49.080 "adrfam": "IPv4", 00:11:49.080 "traddr": "10.0.0.1", 00:11:49.080 "trsvcid": 
"40762" 00:11:49.080 }, 00:11:49.080 "auth": { 00:11:49.080 "state": "completed", 00:11:49.080 "digest": "sha384", 00:11:49.080 "dhgroup": "ffdhe6144" 00:11:49.080 } 00:11:49.080 } 00:11:49.080 ]' 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.080 21:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.647 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:49.647 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.213 21:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.473 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.407 00:11:51.407 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.407 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.407 21:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.665 { 00:11:51.665 "cntlid": 89, 00:11:51.665 "qid": 0, 00:11:51.665 "state": "enabled", 00:11:51.665 "thread": "nvmf_tgt_poll_group_000", 00:11:51.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:51.665 "listen_address": { 00:11:51.665 "trtype": "TCP", 00:11:51.665 "adrfam": "IPv4", 00:11:51.665 "traddr": "10.0.0.3", 00:11:51.665 "trsvcid": "4420" 00:11:51.665 }, 00:11:51.665 "peer_address": { 00:11:51.665 
"trtype": "TCP", 00:11:51.665 "adrfam": "IPv4", 00:11:51.665 "traddr": "10.0.0.1", 00:11:51.665 "trsvcid": "40782" 00:11:51.665 }, 00:11:51.665 "auth": { 00:11:51.665 "state": "completed", 00:11:51.665 "digest": "sha384", 00:11:51.665 "dhgroup": "ffdhe8192" 00:11:51.665 } 00:11:51.665 } 00:11:51.665 ]' 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.665 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.923 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:51.923 21:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:52.857 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.116 21:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.117 21:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.683 00:11:53.683 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.683 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.683 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.941 { 00:11:53.941 "cntlid": 91, 00:11:53.941 "qid": 0, 00:11:53.941 "state": "enabled", 00:11:53.941 "thread": "nvmf_tgt_poll_group_000", 00:11:53.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 
00:11:53.941 "listen_address": { 00:11:53.941 "trtype": "TCP", 00:11:53.941 "adrfam": "IPv4", 00:11:53.941 "traddr": "10.0.0.3", 00:11:53.941 "trsvcid": "4420" 00:11:53.941 }, 00:11:53.941 "peer_address": { 00:11:53.941 "trtype": "TCP", 00:11:53.941 "adrfam": "IPv4", 00:11:53.941 "traddr": "10.0.0.1", 00:11:53.941 "trsvcid": "40810" 00:11:53.941 }, 00:11:53.941 "auth": { 00:11:53.941 "state": "completed", 00:11:53.941 "digest": "sha384", 00:11:53.941 "dhgroup": "ffdhe8192" 00:11:53.941 } 00:11:53.941 } 00:11:53.941 ]' 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.941 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.199 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.199 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.199 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.199 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.199 21:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.461 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:54.461 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:55.409 21:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:55.409 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:55.409 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.409 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.409 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.668 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.235 00:11:56.235 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.235 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.235 21:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.494 { 00:11:56.494 "cntlid": 93, 00:11:56.494 "qid": 0, 00:11:56.494 "state": "enabled", 00:11:56.494 "thread": 
"nvmf_tgt_poll_group_000", 00:11:56.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:56.494 "listen_address": { 00:11:56.494 "trtype": "TCP", 00:11:56.494 "adrfam": "IPv4", 00:11:56.494 "traddr": "10.0.0.3", 00:11:56.494 "trsvcid": "4420" 00:11:56.494 }, 00:11:56.494 "peer_address": { 00:11:56.494 "trtype": "TCP", 00:11:56.494 "adrfam": "IPv4", 00:11:56.494 "traddr": "10.0.0.1", 00:11:56.494 "trsvcid": "58660" 00:11:56.494 }, 00:11:56.494 "auth": { 00:11:56.494 "state": "completed", 00:11:56.494 "digest": "sha384", 00:11:56.494 "dhgroup": "ffdhe8192" 00:11:56.494 } 00:11:56.494 } 00:11:56.494 ]' 00:11:56.494 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.752 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.010 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:57.011 21:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.946 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:57.946 21:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.205 21:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.772 00:11:58.772 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.772 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.772 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.030 { 00:11:59.030 "cntlid": 95, 00:11:59.030 "qid": 0, 00:11:59.030 "state": "enabled", 00:11:59.030 
"thread": "nvmf_tgt_poll_group_000", 00:11:59.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:11:59.030 "listen_address": { 00:11:59.030 "trtype": "TCP", 00:11:59.030 "adrfam": "IPv4", 00:11:59.030 "traddr": "10.0.0.3", 00:11:59.030 "trsvcid": "4420" 00:11:59.030 }, 00:11:59.030 "peer_address": { 00:11:59.030 "trtype": "TCP", 00:11:59.030 "adrfam": "IPv4", 00:11:59.030 "traddr": "10.0.0.1", 00:11:59.030 "trsvcid": "58690" 00:11:59.030 }, 00:11:59.030 "auth": { 00:11:59.030 "state": "completed", 00:11:59.030 "digest": "sha384", 00:11:59.030 "dhgroup": "ffdhe8192" 00:11:59.030 } 00:11:59.030 } 00:11:59.030 ]' 00:11:59.030 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.289 21:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.548 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:11:59.548 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.484 21:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.484 21:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.484 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.052 00:12:01.052 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.052 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.052 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.311 { 00:12:01.311 "cntlid": 97, 00:12:01.311 "qid": 0, 00:12:01.311 "state": "enabled", 00:12:01.311 "thread": "nvmf_tgt_poll_group_000", 00:12:01.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:01.311 "listen_address": { 00:12:01.311 "trtype": "TCP", 00:12:01.311 "adrfam": "IPv4", 00:12:01.311 "traddr": "10.0.0.3", 00:12:01.311 "trsvcid": "4420" 00:12:01.311 }, 00:12:01.311 "peer_address": { 00:12:01.311 "trtype": "TCP", 00:12:01.311 "adrfam": "IPv4", 00:12:01.311 "traddr": "10.0.0.1", 00:12:01.311 "trsvcid": "58712" 00:12:01.311 }, 00:12:01.311 "auth": { 00:12:01.311 "state": "completed", 00:12:01.311 "digest": "sha512", 00:12:01.311 "dhgroup": "null" 00:12:01.311 } 00:12:01.311 } 00:12:01.311 ]' 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.311 21:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.311 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:01.311 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.311 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.311 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.311 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.877 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:01.877 21:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:02.444 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.702 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.267 00:12:03.267 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.267 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.267 21:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.526 21:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.526 { 00:12:03.526 "cntlid": 99, 00:12:03.526 "qid": 0, 00:12:03.526 "state": "enabled", 00:12:03.526 "thread": "nvmf_tgt_poll_group_000", 00:12:03.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:03.526 "listen_address": { 00:12:03.526 "trtype": "TCP", 00:12:03.526 "adrfam": "IPv4", 00:12:03.526 "traddr": "10.0.0.3", 00:12:03.526 "trsvcid": "4420" 00:12:03.526 }, 00:12:03.526 "peer_address": { 00:12:03.526 "trtype": "TCP", 00:12:03.526 "adrfam": "IPv4", 00:12:03.526 "traddr": "10.0.0.1", 00:12:03.526 "trsvcid": "58740" 00:12:03.526 }, 00:12:03.526 "auth": { 00:12:03.526 "state": "completed", 00:12:03.526 "digest": "sha512", 00:12:03.526 "dhgroup": "null" 00:12:03.526 } 00:12:03.526 } 00:12:03.526 ]' 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.526 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.092 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:04.092 21:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.657 21:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:04.657 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.916 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.175 00:12:05.175 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.175 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.175 21:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.742 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.743 { 00:12:05.743 "cntlid": 101, 00:12:05.743 "qid": 0, 00:12:05.743 "state": "enabled", 00:12:05.743 "thread": "nvmf_tgt_poll_group_000", 00:12:05.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:05.743 "listen_address": { 00:12:05.743 "trtype": "TCP", 00:12:05.743 "adrfam": "IPv4", 00:12:05.743 "traddr": "10.0.0.3", 00:12:05.743 "trsvcid": "4420" 00:12:05.743 }, 00:12:05.743 "peer_address": { 00:12:05.743 "trtype": "TCP", 00:12:05.743 "adrfam": "IPv4", 00:12:05.743 "traddr": "10.0.0.1", 00:12:05.743 "trsvcid": "37610" 00:12:05.743 }, 00:12:05.743 "auth": { 00:12:05.743 "state": "completed", 00:12:05.743 "digest": "sha512", 00:12:05.743 "dhgroup": "null" 00:12:05.743 } 00:12:05.743 } 00:12:05.743 ]' 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.743 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.001 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:06.001 21:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:06.969 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:07.227 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:07.227 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.227 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.227 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.228 21:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.486 00:12:07.486 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.486 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.486 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.053 { 00:12:08.053 "cntlid": 103, 00:12:08.053 "qid": 0, 00:12:08.053 "state": "enabled", 00:12:08.053 "thread": "nvmf_tgt_poll_group_000", 00:12:08.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:08.053 "listen_address": { 00:12:08.053 "trtype": "TCP", 00:12:08.053 "adrfam": "IPv4", 00:12:08.053 "traddr": "10.0.0.3", 00:12:08.053 "trsvcid": "4420" 00:12:08.053 }, 00:12:08.053 "peer_address": { 00:12:08.053 "trtype": "TCP", 00:12:08.053 "adrfam": "IPv4", 00:12:08.053 "traddr": "10.0.0.1", 00:12:08.053 "trsvcid": "37638" 00:12:08.053 }, 00:12:08.053 "auth": { 00:12:08.053 "state": "completed", 00:12:08.053 "digest": "sha512", 00:12:08.053 "dhgroup": "null" 00:12:08.053 } 00:12:08.053 } 00:12:08.053 ]' 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.053 21:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.312 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:08.312 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.246 21:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.504 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.505 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.505 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.505 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.505 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.763 00:12:09.763 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.763 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.763 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.331 
21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.331 { 00:12:10.331 "cntlid": 105, 00:12:10.331 "qid": 0, 00:12:10.331 "state": "enabled", 00:12:10.331 "thread": "nvmf_tgt_poll_group_000", 00:12:10.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:10.331 "listen_address": { 00:12:10.331 "trtype": "TCP", 00:12:10.331 "adrfam": "IPv4", 00:12:10.331 "traddr": "10.0.0.3", 00:12:10.331 "trsvcid": "4420" 00:12:10.331 }, 00:12:10.331 "peer_address": { 00:12:10.331 "trtype": "TCP", 00:12:10.331 "adrfam": "IPv4", 00:12:10.331 "traddr": "10.0.0.1", 00:12:10.331 "trsvcid": "37658" 00:12:10.331 }, 00:12:10.331 "auth": { 00:12:10.331 "state": "completed", 00:12:10.331 "digest": "sha512", 00:12:10.331 "dhgroup": "ffdhe2048" 00:12:10.331 } 00:12:10.331 } 00:12:10.331 ]' 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.331 21:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.331 21:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.331 21:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.331 21:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.590 21:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:10.590 21:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:11.525 21:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:11.525 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.784 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.043 00:12:12.301 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.301 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.301 21:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.559 { 00:12:12.559 "cntlid": 107, 00:12:12.559 "qid": 0, 00:12:12.559 "state": "enabled", 00:12:12.559 "thread": "nvmf_tgt_poll_group_000", 00:12:12.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:12.559 "listen_address": { 00:12:12.559 "trtype": "TCP", 00:12:12.559 "adrfam": "IPv4", 00:12:12.559 "traddr": "10.0.0.3", 00:12:12.559 "trsvcid": "4420" 00:12:12.559 }, 00:12:12.559 "peer_address": { 00:12:12.559 "trtype": "TCP", 00:12:12.559 "adrfam": "IPv4", 00:12:12.559 "traddr": "10.0.0.1", 00:12:12.559 "trsvcid": "37688" 00:12:12.559 }, 00:12:12.559 "auth": { 00:12:12.559 "state": "completed", 00:12:12.559 "digest": "sha512", 00:12:12.559 "dhgroup": "ffdhe2048" 00:12:12.559 } 00:12:12.559 } 00:12:12.559 ]' 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.559 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.817 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.817 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.817 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.076 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:13.076 21:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.011 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.012 21:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.579 00:12:14.579 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.579 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.579 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.837 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.837 { 00:12:14.837 "cntlid": 109, 00:12:14.837 "qid": 0, 00:12:14.837 "state": "enabled", 00:12:14.837 "thread": "nvmf_tgt_poll_group_000", 00:12:14.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:14.837 "listen_address": { 00:12:14.837 "trtype": "TCP", 00:12:14.837 "adrfam": "IPv4", 00:12:14.837 "traddr": "10.0.0.3", 00:12:14.837 "trsvcid": "4420" 00:12:14.837 }, 00:12:14.837 "peer_address": { 00:12:14.837 "trtype": "TCP", 00:12:14.838 "adrfam": "IPv4", 00:12:14.838 "traddr": "10.0.0.1", 00:12:14.838 "trsvcid": "47310" 00:12:14.838 }, 00:12:14.838 "auth": { 00:12:14.838 "state": "completed", 00:12:14.838 "digest": "sha512", 00:12:14.838 "dhgroup": "ffdhe2048" 00:12:14.838 } 00:12:14.838 } 00:12:14.838 ]' 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.838 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.474 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:15.474 21:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.041 21:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:16.041 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.299 21:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.558 00:12:16.558 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.558 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.558 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.125 { 00:12:17.125 "cntlid": 111, 00:12:17.125 "qid": 0, 00:12:17.125 "state": "enabled", 00:12:17.125 "thread": "nvmf_tgt_poll_group_000", 00:12:17.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:17.125 "listen_address": { 00:12:17.125 "trtype": "TCP", 00:12:17.125 "adrfam": "IPv4", 00:12:17.125 "traddr": "10.0.0.3", 00:12:17.125 "trsvcid": "4420" 00:12:17.125 }, 00:12:17.125 "peer_address": { 00:12:17.125 "trtype": "TCP", 00:12:17.125 "adrfam": "IPv4", 00:12:17.125 "traddr": "10.0.0.1", 00:12:17.125 "trsvcid": "47336" 00:12:17.125 }, 00:12:17.125 "auth": { 00:12:17.125 "state": "completed", 00:12:17.125 "digest": "sha512", 00:12:17.125 "dhgroup": "ffdhe2048" 00:12:17.125 } 00:12:17.125 } 00:12:17.125 ]' 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.125 21:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.383 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:17.383 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:17.950 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.950 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:17.950 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.950 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.208 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.208 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.208 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.208 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:18.208 21:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.466 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.724 00:12:18.724 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.724 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.724 21:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.997 { 00:12:18.997 "cntlid": 113, 00:12:18.997 "qid": 0, 00:12:18.997 "state": "enabled", 00:12:18.997 "thread": "nvmf_tgt_poll_group_000", 00:12:18.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:18.997 "listen_address": { 00:12:18.997 "trtype": "TCP", 00:12:18.997 "adrfam": "IPv4", 00:12:18.997 "traddr": "10.0.0.3", 00:12:18.997 "trsvcid": "4420" 00:12:18.997 }, 00:12:18.997 "peer_address": { 00:12:18.997 "trtype": "TCP", 00:12:18.997 "adrfam": "IPv4", 00:12:18.997 "traddr": "10.0.0.1", 00:12:18.997 "trsvcid": "47364" 00:12:18.997 }, 00:12:18.997 "auth": { 00:12:18.997 "state": "completed", 00:12:18.997 "digest": "sha512", 00:12:18.997 "dhgroup": "ffdhe3072" 00:12:18.997 } 00:12:18.997 } 00:12:18.997 ]' 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.997 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.256 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:19.256 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.256 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.256 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.256 21:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.516 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:19.516 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 
00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.450 21:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.708 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.709 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.967 00:12:20.967 21:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.967 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.967 21:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.534 { 00:12:21.534 "cntlid": 115, 00:12:21.534 "qid": 0, 00:12:21.534 "state": "enabled", 00:12:21.534 "thread": "nvmf_tgt_poll_group_000", 00:12:21.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:21.534 "listen_address": { 00:12:21.534 "trtype": "TCP", 00:12:21.534 "adrfam": "IPv4", 00:12:21.534 "traddr": "10.0.0.3", 00:12:21.534 "trsvcid": "4420" 00:12:21.534 }, 00:12:21.534 "peer_address": { 00:12:21.534 "trtype": "TCP", 00:12:21.534 "adrfam": "IPv4", 00:12:21.534 "traddr": "10.0.0.1", 00:12:21.534 "trsvcid": "47398" 00:12:21.534 }, 00:12:21.534 "auth": { 00:12:21.534 "state": "completed", 00:12:21.534 "digest": "sha512", 00:12:21.534 "dhgroup": "ffdhe3072" 00:12:21.534 } 00:12:21.534 } 00:12:21.534 ]' 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.534 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.792 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:21.792 21:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret 
DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.727 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.297 00:12:23.297 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.297 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.297 21:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.555 { 00:12:23.555 "cntlid": 117, 00:12:23.555 "qid": 0, 00:12:23.555 "state": "enabled", 00:12:23.555 "thread": "nvmf_tgt_poll_group_000", 00:12:23.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:23.555 "listen_address": { 00:12:23.555 "trtype": "TCP", 00:12:23.555 "adrfam": "IPv4", 00:12:23.555 "traddr": "10.0.0.3", 00:12:23.555 "trsvcid": "4420" 00:12:23.555 }, 00:12:23.555 "peer_address": { 00:12:23.555 "trtype": "TCP", 00:12:23.555 "adrfam": "IPv4", 00:12:23.555 "traddr": "10.0.0.1", 00:12:23.555 "trsvcid": "47414" 00:12:23.555 }, 00:12:23.555 "auth": { 00:12:23.555 "state": "completed", 00:12:23.555 "digest": "sha512", 00:12:23.555 "dhgroup": "ffdhe3072" 00:12:23.555 } 00:12:23.555 } 00:12:23.555 ]' 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.555 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.813 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.813 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.813 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.813 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.813 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.072 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:24.072 21:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:25.006 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.264 21:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.523 00:12:25.523 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.523 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.523 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.781 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.040 { 00:12:26.040 "cntlid": 119, 00:12:26.040 "qid": 0, 00:12:26.040 "state": "enabled", 00:12:26.040 "thread": "nvmf_tgt_poll_group_000", 00:12:26.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:26.040 "listen_address": { 00:12:26.040 "trtype": "TCP", 00:12:26.040 "adrfam": "IPv4", 00:12:26.040 "traddr": "10.0.0.3", 00:12:26.040 "trsvcid": "4420" 00:12:26.040 }, 00:12:26.040 "peer_address": { 00:12:26.040 "trtype": "TCP", 00:12:26.040 "adrfam": "IPv4", 00:12:26.040 "traddr": "10.0.0.1", 00:12:26.040 "trsvcid": "43024" 00:12:26.040 }, 00:12:26.040 "auth": { 00:12:26.040 "state": "completed", 00:12:26.040 "digest": "sha512", 00:12:26.040 "dhgroup": "ffdhe3072" 00:12:26.040 } 00:12:26.040 } 00:12:26.040 ]' 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.040 21:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.300 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:26.300 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:27.236 21:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.495 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.754 00:12:27.754 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.754 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.754 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.320 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.320 { 00:12:28.320 "cntlid": 121, 00:12:28.320 "qid": 0, 00:12:28.320 "state": "enabled", 00:12:28.321 "thread": "nvmf_tgt_poll_group_000", 00:12:28.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:28.321 "listen_address": { 00:12:28.321 "trtype": "TCP", 00:12:28.321 "adrfam": "IPv4", 00:12:28.321 "traddr": "10.0.0.3", 00:12:28.321 "trsvcid": "4420" 00:12:28.321 }, 00:12:28.321 "peer_address": { 00:12:28.321 "trtype": "TCP", 00:12:28.321 "adrfam": "IPv4", 00:12:28.321 "traddr": "10.0.0.1", 00:12:28.321 "trsvcid": "43050" 00:12:28.321 }, 00:12:28.321 "auth": { 00:12:28.321 "state": "completed", 00:12:28.321 "digest": "sha512", 00:12:28.321 "dhgroup": "ffdhe4096" 00:12:28.321 } 00:12:28.321 } 00:12:28.321 ]' 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.321 21:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.580 21:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret 
DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:28.580 21:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:29.516 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.774 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.032 00:12:30.032 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.032 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.032 21:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.291 { 00:12:30.291 "cntlid": 123, 00:12:30.291 "qid": 0, 00:12:30.291 "state": "enabled", 00:12:30.291 "thread": "nvmf_tgt_poll_group_000", 00:12:30.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:30.291 "listen_address": { 00:12:30.291 "trtype": "TCP", 00:12:30.291 "adrfam": "IPv4", 00:12:30.291 "traddr": "10.0.0.3", 00:12:30.291 "trsvcid": "4420" 00:12:30.291 }, 00:12:30.291 "peer_address": { 00:12:30.291 "trtype": "TCP", 00:12:30.291 "adrfam": "IPv4", 00:12:30.291 "traddr": "10.0.0.1", 00:12:30.291 "trsvcid": "43080" 00:12:30.291 }, 00:12:30.291 "auth": { 00:12:30.291 "state": "completed", 00:12:30.291 "digest": "sha512", 00:12:30.291 "dhgroup": "ffdhe4096" 00:12:30.291 } 00:12:30.291 } 00:12:30.291 ]' 00:12:30.291 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.550 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.809 21:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:30.809 21:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.749 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.750 21:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.750 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.317 00:12:32.317 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.317 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.317 21:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.575 { 00:12:32.575 "cntlid": 125, 00:12:32.575 "qid": 0, 00:12:32.575 "state": "enabled", 00:12:32.575 "thread": "nvmf_tgt_poll_group_000", 00:12:32.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:32.575 "listen_address": { 00:12:32.575 "trtype": "TCP", 00:12:32.575 "adrfam": "IPv4", 00:12:32.575 "traddr": "10.0.0.3", 00:12:32.575 "trsvcid": "4420" 00:12:32.575 }, 00:12:32.575 "peer_address": { 00:12:32.575 "trtype": "TCP", 00:12:32.575 "adrfam": "IPv4", 00:12:32.575 "traddr": "10.0.0.1", 00:12:32.575 "trsvcid": "43110" 00:12:32.575 }, 00:12:32.575 "auth": { 00:12:32.575 "state": "completed", 00:12:32.575 "digest": "sha512", 00:12:32.575 "dhgroup": "ffdhe4096" 00:12:32.575 } 00:12:32.575 } 00:12:32.575 ]' 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.575 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.833 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.833 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.833 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.833 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.833 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.091 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:33.091 21:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.658 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.916 21:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.482 00:12:34.482 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.482 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.482 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.739 { 00:12:34.739 "cntlid": 127, 00:12:34.739 "qid": 0, 00:12:34.739 "state": "enabled", 00:12:34.739 "thread": "nvmf_tgt_poll_group_000", 00:12:34.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:34.739 "listen_address": { 00:12:34.739 "trtype": "TCP", 00:12:34.739 "adrfam": "IPv4", 00:12:34.739 "traddr": "10.0.0.3", 00:12:34.739 "trsvcid": "4420" 00:12:34.739 }, 00:12:34.739 "peer_address": { 00:12:34.739 "trtype": "TCP", 00:12:34.739 "adrfam": "IPv4", 00:12:34.739 "traddr": "10.0.0.1", 00:12:34.739 "trsvcid": "43136" 00:12:34.739 }, 00:12:34.739 "auth": { 00:12:34.739 "state": "completed", 00:12:34.739 "digest": "sha512", 00:12:34.739 "dhgroup": "ffdhe4096" 00:12:34.739 } 00:12:34.739 } 00:12:34.739 ]' 00:12:34.739 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.997 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.255 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:35.255 21:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:36.189 21:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.447 21:39:37 
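At this point the trace switches from ffdhe4096 to ffdhe6144 and starts over at key0, which is the outer loop structure referenced as target/auth.sh@119-123 in the markers above. A paraphrase of that shape is sketched below; the group list and key ids are inferred from this part of the log, and hostrpc/connect_authenticate are the auth.sh helpers seen in the trace, so this is not the script's literal text.

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
          # reconfigure the host for exactly one digest/DH group, then run one pass
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
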
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.447 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.013 00:12:37.013 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.013 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.013 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.271 { 00:12:37.271 "cntlid": 129, 00:12:37.271 "qid": 0, 00:12:37.271 "state": "enabled", 00:12:37.271 "thread": "nvmf_tgt_poll_group_000", 00:12:37.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:37.271 "listen_address": { 00:12:37.271 "trtype": "TCP", 00:12:37.271 "adrfam": "IPv4", 00:12:37.271 "traddr": "10.0.0.3", 00:12:37.271 "trsvcid": "4420" 00:12:37.271 }, 00:12:37.271 "peer_address": { 00:12:37.271 "trtype": "TCP", 00:12:37.271 "adrfam": "IPv4", 00:12:37.271 "traddr": "10.0.0.1", 00:12:37.271 "trsvcid": "51262" 00:12:37.271 }, 00:12:37.271 "auth": { 00:12:37.271 "state": "completed", 00:12:37.271 "digest": "sha512", 00:12:37.271 "dhgroup": "ffdhe6144" 00:12:37.271 } 00:12:37.271 } 00:12:37.271 ]' 00:12:37.271 21:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.271 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.271 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.529 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.529 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.529 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.529 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.529 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.787 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:37.787 21:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:38.784 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.041 21:39:39 
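Alongside the bdev attach, each pass also exercises the in-kernel initiator with nvme-cli, as the nvme_connect/disconnect records above show. The same command, shortened, looks like the sketch below; the DHHC-1 secrets are the per-run generated ones and are replaced here with placeholders.

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 \
      --dhchap-secret 'DHHC-1:<host-secret>' --dhchap-ctrl-secret 'DHHC-1:<ctrl-secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # drop the host from the subsystem again before the next combination
  # (target-side RPC, as rpc_cmd does in the trace; default socket assumed)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn"
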
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.041 21:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.605 00:12:39.605 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.605 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.605 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.863 { 00:12:39.863 "cntlid": 131, 00:12:39.863 "qid": 0, 00:12:39.863 "state": "enabled", 00:12:39.863 "thread": "nvmf_tgt_poll_group_000", 00:12:39.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:39.863 "listen_address": { 00:12:39.863 "trtype": "TCP", 00:12:39.863 "adrfam": "IPv4", 00:12:39.863 "traddr": "10.0.0.3", 00:12:39.863 "trsvcid": "4420" 00:12:39.863 }, 00:12:39.863 "peer_address": { 00:12:39.863 "trtype": "TCP", 00:12:39.863 "adrfam": "IPv4", 00:12:39.863 "traddr": "10.0.0.1", 00:12:39.863 "trsvcid": "51290" 00:12:39.863 }, 00:12:39.863 "auth": { 00:12:39.863 "state": "completed", 00:12:39.863 "digest": "sha512", 00:12:39.863 "dhgroup": "ffdhe6144" 00:12:39.863 } 00:12:39.863 } 00:12:39.863 ]' 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.863 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.429 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:40.429 21:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:40.997 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.255 21:39:41 
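The reason some add_host/attach_controller calls in this log carry --dhchap-ctrlr-key while the key3 passes do not is the conditional expansion visible at target/auth.sh@68: the controller-key argument is only built when a controller key exists for that key id. A small standalone illustration of that expansion follows; the array contents are assumed, with id 3 left empty to mirror this run.

  # ckeys: one controller key name per key id; empty entry means no bidirectional auth
  ckeys=([0]=present [1]=present [2]=present [3]=)
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args:" "${ckey[@]}"   # prints nothing for key3, the flag pair otherwise
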
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.255 21:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.823 00:12:41.823 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.823 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.823 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.081 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.081 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.081 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.081 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.082 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.082 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.082 { 00:12:42.082 "cntlid": 133, 00:12:42.082 "qid": 0, 00:12:42.082 "state": "enabled", 00:12:42.082 "thread": "nvmf_tgt_poll_group_000", 00:12:42.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:42.082 "listen_address": { 00:12:42.082 "trtype": "TCP", 00:12:42.082 "adrfam": "IPv4", 00:12:42.082 "traddr": "10.0.0.3", 00:12:42.082 "trsvcid": "4420" 00:12:42.082 }, 00:12:42.082 "peer_address": { 00:12:42.082 "trtype": "TCP", 00:12:42.082 "adrfam": "IPv4", 00:12:42.082 "traddr": "10.0.0.1", 00:12:42.082 "trsvcid": "51332" 00:12:42.082 }, 00:12:42.082 "auth": { 00:12:42.082 "state": "completed", 00:12:42.082 "digest": "sha512", 00:12:42.082 "dhgroup": "ffdhe6144" 00:12:42.082 } 00:12:42.082 } 00:12:42.082 ]' 00:12:42.082 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.082 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.082 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.340 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:42.340 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.340 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.340 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.340 21:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.598 21:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:42.598 21:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.532 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.790 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.066 00:12:44.324 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.324 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.325 21:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.582 { 00:12:44.582 "cntlid": 135, 00:12:44.582 "qid": 0, 00:12:44.582 "state": "enabled", 00:12:44.582 "thread": "nvmf_tgt_poll_group_000", 00:12:44.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:44.582 "listen_address": { 00:12:44.582 "trtype": "TCP", 00:12:44.582 "adrfam": "IPv4", 00:12:44.582 "traddr": "10.0.0.3", 00:12:44.582 "trsvcid": "4420" 00:12:44.582 }, 00:12:44.582 "peer_address": { 00:12:44.582 "trtype": "TCP", 00:12:44.582 "adrfam": "IPv4", 00:12:44.582 "traddr": "10.0.0.1", 00:12:44.582 "trsvcid": "51352" 00:12:44.582 }, 00:12:44.582 "auth": { 00:12:44.582 "state": "completed", 00:12:44.582 "digest": "sha512", 00:12:44.582 "dhgroup": "ffdhe6144" 00:12:44.582 } 00:12:44.582 } 00:12:44.582 ]' 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.582 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.840 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:44.840 21:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:45.774 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.032 21:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.598 00:12:46.598 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.598 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.598 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.163 { 00:12:47.163 "cntlid": 137, 00:12:47.163 "qid": 0, 00:12:47.163 "state": "enabled", 00:12:47.163 "thread": "nvmf_tgt_poll_group_000", 00:12:47.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:47.163 "listen_address": { 00:12:47.163 "trtype": "TCP", 00:12:47.163 "adrfam": "IPv4", 00:12:47.163 "traddr": "10.0.0.3", 00:12:47.163 "trsvcid": "4420" 00:12:47.163 }, 00:12:47.163 "peer_address": { 00:12:47.163 "trtype": "TCP", 00:12:47.163 "adrfam": "IPv4", 00:12:47.163 "traddr": "10.0.0.1", 00:12:47.163 "trsvcid": "48316" 00:12:47.163 }, 00:12:47.163 "auth": { 00:12:47.163 "state": "completed", 00:12:47.163 "digest": "sha512", 00:12:47.163 "dhgroup": "ffdhe8192" 00:12:47.163 } 00:12:47.163 } 00:12:47.163 ]' 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.163 21:39:47 
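Every pass is verified the same way before teardown: the host-side controller name is checked, then the target's view of the qpair must report the expected digest, DH group and a completed auth state. A condensed sketch of those checks, using the jq filters from the trace, is below; the target RPC socket is again assumed to be the default one rpc_cmd uses.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # host side: the attached controller must appear under the expected name
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target side: the authenticated qpair must match the combination under test
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # tear down before the next key/dhgroup combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
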
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.163 21:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.422 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:47.422 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:48.356 21:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:48.614 21:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.614 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.220 00:12:49.221 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.221 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.221 21:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.479 { 00:12:49.479 "cntlid": 139, 00:12:49.479 "qid": 0, 00:12:49.479 "state": "enabled", 00:12:49.479 "thread": "nvmf_tgt_poll_group_000", 00:12:49.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:49.479 "listen_address": { 00:12:49.479 "trtype": "TCP", 00:12:49.479 "adrfam": "IPv4", 00:12:49.479 "traddr": "10.0.0.3", 00:12:49.479 "trsvcid": "4420" 00:12:49.479 }, 00:12:49.479 "peer_address": { 00:12:49.479 "trtype": "TCP", 00:12:49.479 "adrfam": "IPv4", 00:12:49.479 "traddr": "10.0.0.1", 00:12:49.479 "trsvcid": "48350" 00:12:49.479 }, 00:12:49.479 "auth": { 00:12:49.479 "state": "completed", 00:12:49.479 "digest": "sha512", 00:12:49.479 "dhgroup": "ffdhe8192" 00:12:49.479 } 00:12:49.479 } 00:12:49.479 ]' 00:12:49.479 21:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.479 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.738 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:49.738 21:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: --dhchap-ctrl-secret DHHC-1:02:OTgxNzBkYTBkYTY1N2ZlZmM3YmM0MDA5OTdlYTY2NGQ1OGZiNjQ2NDJkOTQzMGQz+yWkUA==: 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.673 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.674 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.674 21:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.608 00:12:51.608 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.609 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.609 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.867 { 00:12:51.867 "cntlid": 141, 00:12:51.867 "qid": 0, 00:12:51.867 "state": "enabled", 00:12:51.867 "thread": "nvmf_tgt_poll_group_000", 00:12:51.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:51.867 "listen_address": { 00:12:51.867 "trtype": "TCP", 00:12:51.867 "adrfam": "IPv4", 00:12:51.867 "traddr": "10.0.0.3", 00:12:51.867 "trsvcid": "4420" 00:12:51.867 }, 00:12:51.867 "peer_address": { 00:12:51.867 "trtype": "TCP", 00:12:51.867 "adrfam": "IPv4", 00:12:51.867 "traddr": "10.0.0.1", 00:12:51.867 "trsvcid": "48378" 00:12:51.867 }, 00:12:51.867 "auth": { 00:12:51.867 "state": "completed", 00:12:51.867 "digest": 
"sha512", 00:12:51.867 "dhgroup": "ffdhe8192" 00:12:51.867 } 00:12:51.867 } 00:12:51.867 ]' 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.867 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.125 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:52.125 21:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:01:MmE1M2Q5NjUxMTk3OGM2OGU5YmQwOWE2Njc1YzVkYjgIeROU: 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:53.058 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.317 21:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.885 00:12:54.143 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.143 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.143 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.400 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.400 { 00:12:54.400 "cntlid": 143, 00:12:54.400 "qid": 0, 00:12:54.400 "state": "enabled", 00:12:54.400 "thread": "nvmf_tgt_poll_group_000", 00:12:54.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:54.400 "listen_address": { 00:12:54.401 "trtype": "TCP", 00:12:54.401 "adrfam": "IPv4", 00:12:54.401 "traddr": "10.0.0.3", 00:12:54.401 "trsvcid": "4420" 00:12:54.401 }, 00:12:54.401 "peer_address": { 00:12:54.401 "trtype": "TCP", 00:12:54.401 "adrfam": "IPv4", 00:12:54.401 "traddr": "10.0.0.1", 00:12:54.401 "trsvcid": "48410" 00:12:54.401 }, 00:12:54.401 "auth": { 00:12:54.401 "state": "completed", 00:12:54.401 
"digest": "sha512", 00:12:54.401 "dhgroup": "ffdhe8192" 00:12:54.401 } 00:12:54.401 } 00:12:54.401 ]' 00:12:54.401 21:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.401 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.966 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:54.966 21:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:12:55.531 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.532 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.789 21:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.812 00:12:56.812 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.812 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.812 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.071 { 00:12:57.071 "cntlid": 145, 00:12:57.071 "qid": 0, 00:12:57.071 "state": "enabled", 00:12:57.071 "thread": "nvmf_tgt_poll_group_000", 00:12:57.071 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:57.071 "listen_address": { 00:12:57.071 "trtype": "TCP", 00:12:57.071 "adrfam": "IPv4", 00:12:57.071 "traddr": "10.0.0.3", 00:12:57.071 "trsvcid": "4420" 00:12:57.071 }, 00:12:57.071 "peer_address": { 00:12:57.071 "trtype": "TCP", 00:12:57.071 "adrfam": "IPv4", 00:12:57.071 "traddr": "10.0.0.1", 00:12:57.071 "trsvcid": "51988" 00:12:57.071 }, 00:12:57.071 "auth": { 00:12:57.071 "state": "completed", 00:12:57.071 "digest": "sha512", 00:12:57.071 "dhgroup": "ffdhe8192" 00:12:57.071 } 00:12:57.071 } 00:12:57.071 ]' 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.071 21:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.331 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:57.331 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:00:NGY1NzY3MTRmNDMyYzdhM2EyMWFiMjMzY2VkZGY5MmQyMDM2YzYyZTAwMGIzNzE2MsLvYw==: --dhchap-ctrl-secret DHHC-1:03:OGU2ZDFlYjQ0ZGVlMmJhNDE1N2Q5ODAwNjljMmNlM2Q1MzJhZjdjODQzZDRmZmQ2MWZhMzFhY2QxNjliZWNlYbVuCqo=: 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 00:12:58.265 21:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:58.265 21:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:58.830 request: 00:12:58.830 { 00:12:58.830 "name": "nvme0", 00:12:58.830 "trtype": "tcp", 00:12:58.830 "traddr": "10.0.0.3", 00:12:58.830 "adrfam": "ipv4", 00:12:58.830 "trsvcid": "4420", 00:12:58.830 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:58.830 "prchk_reftag": false, 00:12:58.830 "prchk_guard": false, 00:12:58.830 "hdgst": false, 00:12:58.830 "ddgst": false, 00:12:58.830 "dhchap_key": "key2", 00:12:58.830 "allow_unrecognized_csi": false, 00:12:58.830 "method": "bdev_nvme_attach_controller", 00:12:58.830 "req_id": 1 00:12:58.830 } 00:12:58.830 Got JSON-RPC error response 00:12:58.830 response: 00:12:58.830 { 00:12:58.830 "code": -5, 00:12:58.830 "message": "Input/output error" 00:12:58.830 } 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:58.830 
21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.830 21:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:59.764 request: 00:12:59.764 { 00:12:59.764 "name": "nvme0", 00:12:59.764 "trtype": "tcp", 00:12:59.764 "traddr": "10.0.0.3", 00:12:59.764 "adrfam": "ipv4", 00:12:59.764 "trsvcid": "4420", 00:12:59.764 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:59.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:12:59.764 "prchk_reftag": false, 00:12:59.764 "prchk_guard": false, 00:12:59.764 "hdgst": false, 00:12:59.764 "ddgst": false, 00:12:59.764 "dhchap_key": "key1", 00:12:59.764 "dhchap_ctrlr_key": "ckey2", 00:12:59.764 "allow_unrecognized_csi": false, 00:12:59.764 "method": "bdev_nvme_attach_controller", 00:12:59.764 "req_id": 1 00:12:59.764 } 00:12:59.764 Got JSON-RPC error response 00:12:59.764 response: 00:12:59.764 { 
00:12:59.764 "code": -5, 00:12:59.764 "message": "Input/output error" 00:12:59.764 } 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.764 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.330 
request: 00:13:00.330 { 00:13:00.330 "name": "nvme0", 00:13:00.330 "trtype": "tcp", 00:13:00.330 "traddr": "10.0.0.3", 00:13:00.330 "adrfam": "ipv4", 00:13:00.330 "trsvcid": "4420", 00:13:00.330 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:00.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:00.330 "prchk_reftag": false, 00:13:00.330 "prchk_guard": false, 00:13:00.330 "hdgst": false, 00:13:00.330 "ddgst": false, 00:13:00.330 "dhchap_key": "key1", 00:13:00.330 "dhchap_ctrlr_key": "ckey1", 00:13:00.330 "allow_unrecognized_csi": false, 00:13:00.330 "method": "bdev_nvme_attach_controller", 00:13:00.330 "req_id": 1 00:13:00.330 } 00:13:00.330 Got JSON-RPC error response 00:13:00.330 response: 00:13:00.330 { 00:13:00.330 "code": -5, 00:13:00.330 "message": "Input/output error" 00:13:00.330 } 00:13:00.330 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:00.330 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.330 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.330 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.330 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67259 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67259 ']' 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67259 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.331 21:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67259 00:13:00.331 killing process with pid 67259 00:13:00.331 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.331 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.331 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67259' 00:13:00.331 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67259 00:13:00.331 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67259 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.589 21:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70529 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70529 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70529 ']' 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.589 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70529 00:13:00.847 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70529 ']' 00:13:00.848 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.848 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.848 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
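Before the per-key checks resume, the freshly restarted target (pid 70529, launched with --wait-for-rpc -L nvmf_auth) has to be brought up by hand over its RPC socket: wait until /var/tmp/spdk.sock answers, load the DHHC-1 key files into the keyring, and finish initialization. A minimal sketch of that bring-up, assuming the key-file paths used in this run; the rpc_get_methods probe and the framework_start_init step are assumptions (the usual way to poll and then complete startup of a --wait-for-rpc target), not something visible verbatim in this output:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to the default /var/tmp/spdk.sock

    # block until the restarted target's RPC server is actually listening
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # register key / controller-key files into the keyring (same names and paths as this run)
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.zdp
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.trv
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.Qjx

    # complete initialization of a target started with --wait-for-rpc (assumed step)
    $rpc framework_start_init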
00:13:00.848 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.848 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.107 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.107 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:01.107 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:01.107 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.107 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 null0 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zdp 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.trv ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.trv 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ipi 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.lyI ]] 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyI 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.365 21:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Ei 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Thm ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Thm 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Qjx 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
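Each connect_authenticate pass in this log is the same host/target RPC round trip; a condensed sketch of one pass (key3, sha512/ffdhe8192), reusing the sockets, address and NQNs from this run and assuming the target answers on rpc.py's default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc="$rpc -s /var/tmp/host.sock"
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c

    # pin the host to a single digest/dhgroup so the negotiated result is deterministic
    $hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # allow this host on the subsystem with the key index under test
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # authenticate and attach from the host side
    $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # confirm what was negotiated on the target side (digest, dhgroup, state)
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    # tear down before the next key index is tried
    $hostrpc bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"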
00:13:01.365 21:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.737 nvme0n1 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.737 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.737 { 00:13:02.737 "cntlid": 1, 00:13:02.737 "qid": 0, 00:13:02.737 "state": "enabled", 00:13:02.737 "thread": "nvmf_tgt_poll_group_000", 00:13:02.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:02.737 "listen_address": { 00:13:02.737 "trtype": "TCP", 00:13:02.737 "adrfam": "IPv4", 00:13:02.737 "traddr": "10.0.0.3", 00:13:02.737 "trsvcid": "4420" 00:13:02.737 }, 00:13:02.737 "peer_address": { 00:13:02.737 "trtype": "TCP", 00:13:02.737 "adrfam": "IPv4", 00:13:02.737 "traddr": "10.0.0.1", 00:13:02.737 "trsvcid": "52064" 00:13:02.737 }, 00:13:02.737 "auth": { 00:13:02.737 "state": "completed", 00:13:02.737 "digest": "sha512", 00:13:02.738 "dhgroup": "ffdhe8192" 00:13:02.738 } 00:13:02.738 } 00:13:02.738 ]' 00:13:02.738 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.047 21:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.320 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:13:03.321 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key3 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:04.252 21:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.510 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.768 request: 00:13:04.768 { 00:13:04.768 "name": "nvme0", 00:13:04.768 "trtype": "tcp", 00:13:04.768 "traddr": "10.0.0.3", 00:13:04.768 "adrfam": "ipv4", 00:13:04.768 "trsvcid": "4420", 00:13:04.768 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:04.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:04.768 "prchk_reftag": false, 00:13:04.768 "prchk_guard": false, 00:13:04.768 "hdgst": false, 00:13:04.768 "ddgst": false, 00:13:04.768 "dhchap_key": "key3", 00:13:04.768 "allow_unrecognized_csi": false, 00:13:04.768 "method": "bdev_nvme_attach_controller", 00:13:04.768 "req_id": 1 00:13:04.768 } 00:13:04.768 Got JSON-RPC error response 00:13:04.768 response: 00:13:04.768 { 00:13:04.768 "code": -5, 00:13:04.768 "message": "Input/output error" 00:13:04.768 } 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:04.768 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.333 21:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.592 request: 00:13:05.592 { 00:13:05.592 "name": "nvme0", 00:13:05.592 "trtype": "tcp", 00:13:05.592 "traddr": "10.0.0.3", 00:13:05.592 "adrfam": "ipv4", 00:13:05.592 "trsvcid": "4420", 00:13:05.592 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:05.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:05.592 "prchk_reftag": false, 00:13:05.592 "prchk_guard": false, 00:13:05.592 "hdgst": false, 00:13:05.592 "ddgst": false, 00:13:05.592 "dhchap_key": "key3", 00:13:05.592 "allow_unrecognized_csi": false, 00:13:05.592 "method": "bdev_nvme_attach_controller", 00:13:05.592 "req_id": 1 00:13:05.592 } 00:13:05.592 Got JSON-RPC error response 00:13:05.592 response: 00:13:05.592 { 00:13:05.592 "code": -5, 00:13:05.592 "message": "Input/output error" 00:13:05.592 } 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.592 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.850 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:06.416 request: 00:13:06.416 { 00:13:06.416 "name": "nvme0", 00:13:06.416 "trtype": "tcp", 00:13:06.416 "traddr": "10.0.0.3", 00:13:06.416 "adrfam": "ipv4", 00:13:06.416 "trsvcid": "4420", 00:13:06.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:06.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:06.416 "prchk_reftag": false, 00:13:06.416 "prchk_guard": false, 00:13:06.416 "hdgst": false, 00:13:06.416 "ddgst": false, 00:13:06.416 "dhchap_key": "key0", 00:13:06.416 "dhchap_ctrlr_key": "key1", 00:13:06.416 "allow_unrecognized_csi": false, 00:13:06.416 "method": "bdev_nvme_attach_controller", 00:13:06.416 "req_id": 1 00:13:06.416 } 00:13:06.416 Got JSON-RPC error response 00:13:06.416 response: 00:13:06.416 { 00:13:06.416 "code": -5, 00:13:06.416 "message": "Input/output error" 00:13:06.416 } 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:06.417 21:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:06.675 nvme0n1 00:13:06.933 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:06.933 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.933 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:07.192 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.192 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.192 21:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:07.450 21:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:08.386 nvme0n1 00:13:08.386 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:08.386 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:08.386 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:08.644 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.210 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.210 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:13:09.210 21:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid 3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -l 0 --dhchap-secret DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: --dhchap-ctrl-secret DHHC-1:03:NGZjODllZmZlNDM1ODY4ZTFiMzk0ZTRiYjYwMzM3ZTZjODY0NzA4MmRmZDg2YzY4MmVmMzhhYjgyZTE0NDJhN/o3/jQ=: 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.778 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:10.037 21:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:10.603 request: 00:13:10.603 { 00:13:10.603 "name": "nvme0", 00:13:10.603 "trtype": "tcp", 00:13:10.603 "traddr": "10.0.0.3", 00:13:10.603 "adrfam": "ipv4", 00:13:10.603 "trsvcid": "4420", 00:13:10.603 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:10.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c", 00:13:10.603 "prchk_reftag": false, 00:13:10.603 "prchk_guard": false, 00:13:10.603 "hdgst": false, 00:13:10.603 "ddgst": false, 00:13:10.603 "dhchap_key": "key1", 00:13:10.603 "allow_unrecognized_csi": false, 00:13:10.603 "method": "bdev_nvme_attach_controller", 00:13:10.603 "req_id": 1 00:13:10.603 } 00:13:10.603 Got JSON-RPC error response 00:13:10.603 response: 00:13:10.603 { 00:13:10.603 "code": -5, 00:13:10.603 "message": "Input/output error" 00:13:10.603 } 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:10.861 21:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:11.794 nvme0n1 00:13:11.795 
21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:11.795 21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.795 21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:12.053 21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.053 21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.053 21:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:12.620 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:12.878 nvme0n1 00:13:12.878 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:12.878 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:12.878 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.136 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.136 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.136 21:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.701 21:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: '' 2s 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: ]] 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzIyZjYwODIwZDM5MjIzZTk1NTE3ODcwZDkzMmE3NjLur4FF: 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:13.701 21:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: 2s 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:15.601 21:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: ]] 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MDZmYTM5MjE0MTk2ZGEyMDA5MGNiNmZmODY4NGNjMzliOTMwODljYzNkZTIzMjVlXXXB+w==: 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:15.601 21:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:18.132 21:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:18.697 nvme0n1 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:18.697 21:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:19.631 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:20.200 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:20.200 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.200 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:20.459 21:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:20.459 21:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:20.459 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:21.025 request: 00:13:21.025 { 00:13:21.025 "name": "nvme0", 00:13:21.025 "dhchap_key": "key1", 00:13:21.025 "dhchap_ctrlr_key": "key3", 00:13:21.025 "method": "bdev_nvme_set_keys", 00:13:21.025 "req_id": 1 00:13:21.025 } 00:13:21.025 Got JSON-RPC error response 00:13:21.025 response: 00:13:21.025 { 00:13:21.025 "code": -13, 00:13:21.025 "message": "Permission denied" 00:13:21.025 } 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:21.025 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.282 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:21.283 21:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:22.218 21:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:22.218 21:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.218 21:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:22.784 21:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:23.728 nvme0n1 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:23.728 21:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:24.309 request: 00:13:24.309 { 00:13:24.309 "name": "nvme0", 00:13:24.309 "dhchap_key": "key2", 00:13:24.309 "dhchap_ctrlr_key": "key0", 00:13:24.309 "method": "bdev_nvme_set_keys", 00:13:24.309 "req_id": 1 00:13:24.309 } 00:13:24.309 Got JSON-RPC error response 00:13:24.309 response: 00:13:24.309 { 00:13:24.309 "code": -13, 00:13:24.309 "message": "Permission denied" 00:13:24.309 } 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.309 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:24.566 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:24.566 21:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:25.941 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:25.941 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:25.941 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.941 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:25.941 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67278 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67278 ']' 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67278 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67278 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:25.942 killing process with pid 67278 00:13:25.942 21:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67278' 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67278 00:13:25.942 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67278 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.200 21:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.200 rmmod nvme_tcp 00:13:26.200 rmmod nvme_fabrics 00:13:26.458 rmmod nvme_keyring 00:13:26.458 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.458 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70529 ']' 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70529 ']' 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.459 killing process with pid 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70529' 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70529 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:26.459 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zdp /tmp/spdk.key-sha256.Ipi /tmp/spdk.key-sha384.4Ei /tmp/spdk.key-sha512.Qjx /tmp/spdk.key-sha512.trv /tmp/spdk.key-sha384.lyI /tmp/spdk.key-sha256.Thm '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:26.717 00:13:26.717 real 3m31.929s 00:13:26.717 user 8m31.165s 00:13:26.717 sys 0m31.228s 00:13:26.717 ************************************ 00:13:26.717 END TEST nvmf_auth_target 00:13:26.717 ************************************ 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.717 21:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:26.717 21:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:26.718 21:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:26.718 21:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.718 21:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.977 ************************************ 00:13:26.977 START TEST nvmf_bdevio_no_huge 00:13:26.977 ************************************ 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:26.977 * Looking for test storage... 00:13:26.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.977 --rc genhtml_branch_coverage=1 00:13:26.977 --rc genhtml_function_coverage=1 00:13:26.977 --rc genhtml_legend=1 00:13:26.977 --rc geninfo_all_blocks=1 00:13:26.977 --rc geninfo_unexecuted_blocks=1 00:13:26.977 00:13:26.977 ' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.977 --rc genhtml_branch_coverage=1 00:13:26.977 --rc genhtml_function_coverage=1 00:13:26.977 --rc genhtml_legend=1 00:13:26.977 --rc geninfo_all_blocks=1 00:13:26.977 --rc geninfo_unexecuted_blocks=1 00:13:26.977 00:13:26.977 ' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.977 --rc genhtml_branch_coverage=1 00:13:26.977 --rc genhtml_function_coverage=1 00:13:26.977 --rc genhtml_legend=1 00:13:26.977 --rc geninfo_all_blocks=1 00:13:26.977 --rc geninfo_unexecuted_blocks=1 00:13:26.977 00:13:26.977 ' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.977 --rc genhtml_branch_coverage=1 00:13:26.977 --rc genhtml_function_coverage=1 00:13:26.977 --rc genhtml_legend=1 00:13:26.977 --rc geninfo_all_blocks=1 00:13:26.977 --rc geninfo_unexecuted_blocks=1 00:13:26.977 00:13:26.977 ' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.977 
21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.977 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.978 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.978 
21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:26.978 Cannot find device "nvmf_init_br" 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:26.978 Cannot find device "nvmf_init_br2" 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:26.978 Cannot find device "nvmf_tgt_br" 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:26.978 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.237 Cannot find device "nvmf_tgt_br2" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:27.237 Cannot find device "nvmf_init_br" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:27.237 Cannot find device "nvmf_init_br2" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:27.237 Cannot find device "nvmf_tgt_br" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:27.237 Cannot find device "nvmf_tgt_br2" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:27.237 Cannot find device "nvmf_br" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:27.237 Cannot find device "nvmf_init_if" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:27.237 Cannot find device "nvmf_init_if2" 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:27.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:27.237 21:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:27.237 21:40:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.237 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:27.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:13:27.497 00:13:27.497 --- 10.0.0.3 ping statistics --- 00:13:27.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.497 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:27.497 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:27.497 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:13:27.497 00:13:27.497 --- 10.0.0.4 ping statistics --- 00:13:27.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.497 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:27.497 00:13:27.497 --- 10.0.0.1 ping statistics --- 00:13:27.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.497 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:27.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:13:27.497 00:13:27.497 --- 10.0.0.2 ping statistics --- 00:13:27.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.497 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:27.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71178 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71178 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71178 ']' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.497 21:40:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:27.497 [2024-12-10 21:40:28.181616] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:13:27.497 [2024-12-10 21:40:28.181725] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:27.755 [2024-12-10 21:40:28.351372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.755 [2024-12-10 21:40:28.427918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.755 [2024-12-10 21:40:28.427991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.755 [2024-12-10 21:40:28.428006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.755 [2024-12-10 21:40:28.428016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.755 [2024-12-10 21:40:28.428024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.755 [2024-12-10 21:40:28.428637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:13:27.755 [2024-12-10 21:40:28.429368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:13:27.755 [2024-12-10 21:40:28.429472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:13:27.755 [2024-12-10 21:40:28.429477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.755 [2024-12-10 21:40:28.436366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.690 [2024-12-10 21:40:29.207295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.690 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.691 Malloc0 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.691 21:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:28.691 [2024-12-10 21:40:29.245742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:28.691 { 00:13:28.691 "params": { 00:13:28.691 "name": "Nvme$subsystem", 00:13:28.691 "trtype": "$TEST_TRANSPORT", 00:13:28.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:28.691 "adrfam": "ipv4", 00:13:28.691 "trsvcid": "$NVMF_PORT", 00:13:28.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:28.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:28.691 "hdgst": ${hdgst:-false}, 00:13:28.691 "ddgst": ${ddgst:-false} 00:13:28.691 }, 00:13:28.691 "method": "bdev_nvme_attach_controller" 00:13:28.691 } 00:13:28.691 EOF 00:13:28.691 )") 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
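Editor's note: the target-side provisioning driven through rpc_cmd above can also be issued directly with rpc.py against the running nvmf_tgt; a sketch of the equivalent calls follows. Values (NQN, serial, sizes, address, port) mirror the log; the default /var/tmp/spdk.sock RPC socket is assumed.

# Equivalent provisioning via rpc.py (values mirror the rpc_cmd calls above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport; -u sets in-capsule data size
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420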
00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:28.691 21:40:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:28.691 "params": { 00:13:28.691 "name": "Nvme1", 00:13:28.691 "trtype": "tcp", 00:13:28.691 "traddr": "10.0.0.3", 00:13:28.691 "adrfam": "ipv4", 00:13:28.691 "trsvcid": "4420", 00:13:28.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.691 "hdgst": false, 00:13:28.691 "ddgst": false 00:13:28.691 }, 00:13:28.691 "method": "bdev_nvme_attach_controller" 00:13:28.691 }' 00:13:28.691 [2024-12-10 21:40:29.305340] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:13:28.691 [2024-12-10 21:40:29.305946] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71220 ] 00:13:28.691 [2024-12-10 21:40:29.463093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.949 [2024-12-10 21:40:29.525569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.949 [2024-12-10 21:40:29.525637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.949 [2024-12-10 21:40:29.525640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.949 [2024-12-10 21:40:29.539426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.949 I/O targets: 00:13:28.949 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:28.949 00:13:28.949 00:13:28.949 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.949 http://cunit.sourceforge.net/ 00:13:28.949 00:13:28.949 00:13:28.949 Suite: bdevio tests on: Nvme1n1 00:13:28.949 Test: blockdev write read block ...passed 00:13:28.949 Test: blockdev write zeroes read block ...passed 00:13:28.949 Test: blockdev write zeroes read no split ...passed 00:13:29.207 Test: blockdev write zeroes read split ...passed 00:13:29.207 Test: blockdev write zeroes read split partial ...passed 00:13:29.207 Test: blockdev reset ...[2024-12-10 21:40:29.748660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:29.207 [2024-12-10 21:40:29.748796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ac720 (9): Bad file descriptor 00:13:29.207 [2024-12-10 21:40:29.768192] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
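Editor's note: the attach-controller JSON printed above reaches bdevio through an anonymous file descriptor (the --json /dev/fd/62 in the invocation). Below is a hedged sketch of that pattern using process substitution; the top-level "subsystems"/"config" wrapper shown is the generic SPDK subsystem-config layout and is illustrative, not a verbatim dump from this run.

# Sketch: hand a generated config to an SPDK app over an anonymous fd, as the
# '--json /dev/fd/62' invocation above does (wrapper layout is illustrative).
gen_config() {
  cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}
gen_config | jq .                                   # sanity-check the JSON first
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_config) --no-huge -s 1024   # <(...) appears as /dev/fd/NN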
00:13:29.207 passed 00:13:29.207 Test: blockdev write read 8 blocks ...passed 00:13:29.207 Test: blockdev write read size > 128k ...passed 00:13:29.207 Test: blockdev write read invalid size ...passed 00:13:29.207 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:29.207 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:29.207 Test: blockdev write read max offset ...passed 00:13:29.207 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:29.207 Test: blockdev writev readv 8 blocks ...passed 00:13:29.207 Test: blockdev writev readv 30 x 1block ...passed 00:13:29.207 Test: blockdev writev readv block ...passed 00:13:29.207 Test: blockdev writev readv size > 128k ...passed 00:13:29.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:29.207 Test: blockdev comparev and writev ...[2024-12-10 21:40:29.776732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.777032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.777149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.777266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.777911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.778018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.778149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.778268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.778915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.779138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.779243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.779786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.779924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.780030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:29.207 [2024-12-10 21:40:29.780121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:29.207 passed 00:13:29.207 Test: blockdev nvme passthru rw ...passed 00:13:29.207 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:29.781278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.207 [2024-12-10 21:40:29.781417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.781676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.207 [2024-12-10 21:40:29.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.782128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.207 [2024-12-10 21:40:29.782372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:29.207 [2024-12-10 21:40:29.782656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:29.207 [2024-12-10 21:40:29.782864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:13:29.207 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:13:29.207 passed 00:13:29.207 Test: blockdev copy ...passed 00:13:29.207 00:13:29.207 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.207 suites 1 1 n/a 0 0 00:13:29.207 tests 23 23 23 0 0 00:13:29.207 asserts 152 152 152 0 n/a 00:13:29.207 00:13:29.207 Elapsed time = 0.173 seconds 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.466 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.725 rmmod nvme_tcp 00:13:29.725 rmmod nvme_fabrics 00:13:29.725 rmmod nvme_keyring 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71178 ']' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71178 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71178 ']' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71178 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71178 00:13:29.725 killing process with pid 71178 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71178' 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71178 00:13:29.725 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71178 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:30.295 21:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.295 21:40:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:30.295 ************************************ 00:13:30.295 END TEST nvmf_bdevio_no_huge 00:13:30.295 ************************************ 00:13:30.295 00:13:30.295 real 0m3.505s 00:13:30.295 user 0m10.703s 00:13:30.295 sys 0m1.405s 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.295 ************************************ 00:13:30.295 START TEST nvmf_tls 00:13:30.295 ************************************ 00:13:30.295 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:30.569 * Looking for test storage... 
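Editor's note: the firewall handling in the setup and teardown above follows a tag-and-sweep pattern: every rule the test adds is tagged with an SPDK_NVMF comment (the ipts wrapper), and cleanup (iptr) reloads the ruleset without the tagged entries. A minimal sketch of the two helpers, assuming iptables with the comment match module is available:

# Minimal sketch of the ipts/iptr pattern seen above: tag each test rule with a
# recognizable comment, then drop all tagged rules in one sweep at teardown.
ipts() {
  # append the original rule text as a comment so it can be filtered out later
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
  # reload the ruleset minus every line carrying the SPDK_NVMF tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # as used during setup
iptr                                                            # as used during teardown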
00:13:30.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.569 --rc genhtml_branch_coverage=1 00:13:30.569 --rc genhtml_function_coverage=1 00:13:30.569 --rc genhtml_legend=1 00:13:30.569 --rc geninfo_all_blocks=1 00:13:30.569 --rc geninfo_unexecuted_blocks=1 00:13:30.569 00:13:30.569 ' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.569 --rc genhtml_branch_coverage=1 00:13:30.569 --rc genhtml_function_coverage=1 00:13:30.569 --rc genhtml_legend=1 00:13:30.569 --rc geninfo_all_blocks=1 00:13:30.569 --rc geninfo_unexecuted_blocks=1 00:13:30.569 00:13:30.569 ' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.569 --rc genhtml_branch_coverage=1 00:13:30.569 --rc genhtml_function_coverage=1 00:13:30.569 --rc genhtml_legend=1 00:13:30.569 --rc geninfo_all_blocks=1 00:13:30.569 --rc geninfo_unexecuted_blocks=1 00:13:30.569 00:13:30.569 ' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:30.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.569 --rc genhtml_branch_coverage=1 00:13:30.569 --rc genhtml_function_coverage=1 00:13:30.569 --rc genhtml_legend=1 00:13:30.569 --rc geninfo_all_blocks=1 00:13:30.569 --rc geninfo_unexecuted_blocks=1 00:13:30.569 00:13:30.569 ' 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.569 21:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.569 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.570 
21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:30.570 Cannot find device "nvmf_init_br" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:30.570 Cannot find device "nvmf_init_br2" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:30.570 Cannot find device "nvmf_tgt_br" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.570 Cannot find device "nvmf_tgt_br2" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:30.570 Cannot find device "nvmf_init_br" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:30.570 Cannot find device "nvmf_init_br2" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:30.570 Cannot find device "nvmf_tgt_br" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:30.570 Cannot find device "nvmf_tgt_br2" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:30.570 Cannot find device "nvmf_br" 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:30.570 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:30.829 Cannot find device "nvmf_init_if" 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:30.829 Cannot find device "nvmf_init_if2" 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:30.829 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:30.830 21:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:30.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:30.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:30.830 00:13:30.830 --- 10.0.0.3 ping statistics --- 00:13:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.830 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:30.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:30.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:13:30.830 00:13:30.830 --- 10.0.0.4 ping statistics --- 00:13:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.830 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:30.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:30.830 00:13:30.830 --- 10.0.0.1 ping statistics --- 00:13:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.830 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:30.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:30.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:13:30.830 00:13:30.830 --- 10.0.0.2 ping statistics --- 00:13:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.830 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.830 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71452 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71452 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71452 ']' 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.089 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 [2024-12-10 21:40:31.695615] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:13:31.089 [2024-12-10 21:40:31.695944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.089 [2024-12-10 21:40:31.850856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.348 [2024-12-10 21:40:31.883916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.348 [2024-12-10 21:40:31.884176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.348 [2024-12-10 21:40:31.884355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.348 [2024-12-10 21:40:31.884514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.348 [2024-12-10 21:40:31.884636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.348 [2024-12-10 21:40:31.884977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:31.348 21:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:31.607 true 00:13:31.607 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:31.607 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:31.865 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:31.865 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:31.865 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:32.124 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:32.124 21:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:32.383 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:32.383 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:32.383 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:32.950 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:32.950 21:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.208 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:33.208 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:33.208 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:33.208 21:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.476 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:33.476 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:33.476 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:33.734 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:33.734 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:34.300 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:34.300 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:34.300 21:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:34.557 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:34.557 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:34.814 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:34.814 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:34.814 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:34.815 21:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.iMHNnv7nNn 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.eJoHchHq21 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iMHNnv7nNn 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.eJoHchHq21 00:13:34.815 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:35.073 21:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:35.330 [2024-12-10 21:40:36.066716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.588 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.iMHNnv7nNn 00:13:35.588 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iMHNnv7nNn 00:13:35.588 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:35.588 [2024-12-10 21:40:36.354189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.846 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:36.104 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:36.362 [2024-12-10 21:40:36.918313] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:36.362 [2024-12-10 21:40:36.918563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:36.362 21:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:36.620 malloc0 00:13:36.620 21:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.879 21:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.iMHNnv7nNn 00:13:37.138 21:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.396 21:40:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.iMHNnv7nNn 00:13:49.641 Initializing NVMe Controllers 00:13:49.641 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.641 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.641 Initialization complete. Launching workers. 00:13:49.641 ======================================================== 00:13:49.641 Latency(us) 00:13:49.641 Device Information : IOPS MiB/s Average min max 00:13:49.641 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9177.09 35.85 6975.57 1117.29 11823.00 00:13:49.641 ======================================================== 00:13:49.641 Total : 9177.09 35.85 6975.57 1117.29 11823.00 00:13:49.641 00:13:49.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iMHNnv7nNn 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iMHNnv7nNn 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71686 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71686 /var/tmp/bdevperf.sock 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71686 ']' 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.641 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
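At this point the trace has brought the target from --wait-for-rpc to a TLS-enabled NVMe/TCP listener: the veth/bridge/netns plumbing from nvmf/common.sh at the top of this section supplies 10.0.0.3 inside nvmf_tgt_ns_spdk, and target/tls.sh then drives rpc.py against the target's /var/tmp/spdk.sock (the earlier --tls-version 7 and ktls toggles are only get/set checks of the sock options). Condensed into a plain shell sketch, with every command copied from the trace and $rpc introduced here as shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

    # Target-side TLS setup, condensed from the target/tls.sh trace above.
    # Assumes nvmf_tgt is already running with --wait-for-rpc inside nvmf_tgt_ns_spdk (see nvmfappstart above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.iMHNnv7nNn                              # PSK in interchange format, written and chmod 0600 earlier
    $rpc sock_set_default_impl -i ssl                    # route sockets through the ssl implementation
    $rpc sock_impl_set_options -i ssl --tls-version 13   # pin TLS 1.3 for the listener
    $rpc framework_start_init                            # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requires TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The spdk_nvme_perf run that follows (-S ssl with --psk-path pointing at the same key file) is the first consumer of this listener and completes at roughly 9.2k IOPS in the table above.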
00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.642 [2024-12-10 21:40:48.380593] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:13:49.642 [2024-12-10 21:40:48.380879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71686 ] 00:13:49.642 [2024-12-10 21:40:48.533151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.642 [2024-12-10 21:40:48.575272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.642 [2024-12-10 21:40:48.609657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iMHNnv7nNn 00:13:49.642 21:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.642 [2024-12-10 21:40:49.303988] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:49.642 TLSTESTn1 00:13:49.642 21:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:49.642 Running I/O for 10 seconds... 
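The initiator side repeats the same keyring pattern over bdevperf's own RPC socket before attaching the controller. A condensed sketch of what run_bdevperf does for this successful case; the commands are copied from the trace, while the polling loop is only one way to realize the waitforlisten step, not necessarily how that helper is implemented:

    # Initiator-side flow for the successful TLS case (bdevperf pid 71686 above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # stand-in for waitforlisten: poll until the application answers on its RPC socket
    until $rpc -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.iMHNnv7nNn
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0    # creates bdev TLSTESTn1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

The ten-second verify run above settles at about 3.9k IOPS, the figure recorded in the JSON results block that follows.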
00:13:50.837 3900.00 IOPS, 15.23 MiB/s [2024-12-10T21:40:52.555Z] 3960.00 IOPS, 15.47 MiB/s [2024-12-10T21:40:53.929Z] 3909.00 IOPS, 15.27 MiB/s [2024-12-10T21:40:54.863Z] 3936.25 IOPS, 15.38 MiB/s [2024-12-10T21:40:55.797Z] 3957.20 IOPS, 15.46 MiB/s [2024-12-10T21:40:56.730Z] 3947.33 IOPS, 15.42 MiB/s [2024-12-10T21:40:57.664Z] 3936.14 IOPS, 15.38 MiB/s [2024-12-10T21:40:58.600Z] 3936.25 IOPS, 15.38 MiB/s [2024-12-10T21:40:59.542Z] 3930.22 IOPS, 15.35 MiB/s [2024-12-10T21:40:59.542Z] 3937.10 IOPS, 15.38 MiB/s 00:13:58.759 Latency(us) 00:13:58.759 [2024-12-10T21:40:59.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.759 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:58.759 Verification LBA range: start 0x0 length 0x2000 00:13:58.759 TLSTESTn1 : 10.02 3943.38 15.40 0.00 0.00 32400.19 5600.35 33840.41 00:13:58.759 [2024-12-10T21:40:59.542Z] =================================================================================================================== 00:13:58.759 [2024-12-10T21:40:59.542Z] Total : 3943.38 15.40 0.00 0.00 32400.19 5600.35 33840.41 00:13:58.759 { 00:13:58.759 "results": [ 00:13:58.759 { 00:13:58.759 "job": "TLSTESTn1", 00:13:58.759 "core_mask": "0x4", 00:13:58.759 "workload": "verify", 00:13:58.759 "status": "finished", 00:13:58.759 "verify_range": { 00:13:58.759 "start": 0, 00:13:58.759 "length": 8192 00:13:58.759 }, 00:13:58.759 "queue_depth": 128, 00:13:58.759 "io_size": 4096, 00:13:58.759 "runtime": 10.016022, 00:13:58.759 "iops": 3943.381913498193, 00:13:58.759 "mibps": 15.403835599602317, 00:13:58.759 "io_failed": 0, 00:13:58.759 "io_timeout": 0, 00:13:58.759 "avg_latency_us": 32400.190669671116, 00:13:58.759 "min_latency_us": 5600.349090909091, 00:13:58.759 "max_latency_us": 33840.40727272727 00:13:58.759 } 00:13:58.759 ], 00:13:58.759 "core_count": 1 00:13:58.759 } 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71686 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71686 ']' 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71686 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71686 00:13:59.019 killing process with pid 71686 00:13:59.019 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.019 00:13:59.019 Latency(us) 00:13:59.019 [2024-12-10T21:40:59.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.019 [2024-12-10T21:40:59.802Z] =================================================================================================================== 00:13:59.019 [2024-12-10T21:40:59.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71686' 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71686 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71686 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eJoHchHq21 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eJoHchHq21 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eJoHchHq21 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eJoHchHq21 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71820 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71820 /var/tmp/bdevperf.sock 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71820 ']' 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.019 21:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.019 [2024-12-10 21:40:59.790882] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:13:59.019 [2024-12-10 21:40:59.790999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71820 ] 00:13:59.278 [2024-12-10 21:40:59.968246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.278 [2024-12-10 21:41:00.018055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.278 [2024-12-10 21:41:00.051893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.536 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.536 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:59.536 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eJoHchHq21 00:13:59.793 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:00.359 [2024-12-10 21:41:00.860144] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.359 [2024-12-10 21:41:00.869166] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:00.359 [2024-12-10 21:41:00.869820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccd150 (107): Transport endpoint is not connected 00:14:00.359 [2024-12-10 21:41:00.870810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccd150 (9): Bad file descriptor 00:14:00.359 [2024-12-10 21:41:00.871807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:00.359 [2024-12-10 21:41:00.871831] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:00.359 [2024-12-10 21:41:00.871843] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:00.360 [2024-12-10 21:41:00.871858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
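This is the wrong-key case from target/tls.sh@147: the target only knows key0 (/tmp/tmp.iMHNnv7nNn) for host1, while the initiator presented the second key (/tmp/tmp.eJoHchHq21), so the target tears the connection down during the TLS handshake and the host only observes errno 107 (Transport endpoint is not connected) before the controller lands in the failed state. When two interchange-format keys need to be compared offline, decoding the base64 payload is usually enough to see that the secrets differ; a small sketch using only standard tools, assuming the third ':'-separated field is the base64 payload, as in the NVMeTLSkey-1:01:...: strings generated earlier in this run:

    # Dump the decoded payload of both PSK files written earlier in this run.
    for f in /tmp/tmp.iMHNnv7nNn /tmp/tmp.eJoHchHq21; do
        payload=$(cut -d: -f3 "$f")                        # NVMeTLSkey-1:01:<base64 payload>:
        printf '%s: ' "$f"
        printf '%s' "$payload" | base64 -d | od -An -tx1   # key material plus the trailing bytes the helper appended
    done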
00:14:00.360 request: 00:14:00.360 { 00:14:00.360 "name": "TLSTEST", 00:14:00.360 "trtype": "tcp", 00:14:00.360 "traddr": "10.0.0.3", 00:14:00.360 "adrfam": "ipv4", 00:14:00.360 "trsvcid": "4420", 00:14:00.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.360 "prchk_reftag": false, 00:14:00.360 "prchk_guard": false, 00:14:00.360 "hdgst": false, 00:14:00.360 "ddgst": false, 00:14:00.360 "psk": "key0", 00:14:00.360 "allow_unrecognized_csi": false, 00:14:00.360 "method": "bdev_nvme_attach_controller", 00:14:00.360 "req_id": 1 00:14:00.360 } 00:14:00.360 Got JSON-RPC error response 00:14:00.360 response: 00:14:00.360 { 00:14:00.360 "code": -5, 00:14:00.360 "message": "Input/output error" 00:14:00.360 } 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71820 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71820 ']' 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71820 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71820 00:14:00.360 killing process with pid 71820 00:14:00.360 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.360 00:14:00.360 Latency(us) 00:14:00.360 [2024-12-10T21:41:01.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.360 [2024-12-10T21:41:01.143Z] =================================================================================================================== 00:14:00.360 [2024-12-10T21:41:01.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71820' 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71820 00:14:00.360 21:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71820 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iMHNnv7nNn 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iMHNnv7nNn 
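All of the failure cases in this part of tls.sh are wrapped in the NOT helper, whose bookkeeping is visible in the trace: the local es=0, the (( es > 128 )) and (( !es == 0 )) checks, and the explicit return 1 that run_bdevperf issues at target/tls.sh@38. A minimal sketch of that expected-failure pattern, assuming the helper does little more than invert a non-zero exit status; the real implementation lives in autotest_common.sh and has extra handling for signals and valid_exec_arg:

    # Expected-failure wrapper sketch (simplified from the es handling traced above).
    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, e.g. run_bdevperf with a bad key or NQN
        (( es != 0 ))        # NOT succeeds only when the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iMHNnv7nNn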
00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.iMHNnv7nNn 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iMHNnv7nNn 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71845 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71845 /var/tmp/bdevperf.sock 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71845 ']' 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.360 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 [2024-12-10 21:41:01.123730] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:00.360 [2024-12-10 21:41:01.124048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71845 ] 00:14:00.618 [2024-12-10 21:41:01.273955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.618 [2024-12-10 21:41:01.307245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.618 [2024-12-10 21:41:01.338129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.618 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.618 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:00.618 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iMHNnv7nNn 00:14:01.184 21:41:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:01.443 [2024-12-10 21:41:01.970520] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:01.443 [2024-12-10 21:41:01.979155] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:01.443 [2024-12-10 21:41:01.979203] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:01.443 [2024-12-10 21:41:01.979256] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:01.443 [2024-12-10 21:41:01.979327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2401150 (107): Transport endpoint is not connected 00:14:01.443 [2024-12-10 21:41:01.980317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2401150 (9): Bad file descriptor 00:14:01.443 [2024-12-10 21:41:01.981312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:01.443 [2024-12-10 21:41:01.981347] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:01.443 [2024-12-10 21:41:01.981360] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:01.443 [2024-12-10 21:41:01.981377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
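The @150 case above keeps the correct key but presents it as nqn.2016-06.io.spdk:host2. PSK resolution on the target is keyed by the identity string the client offers, 'NVMe0R01 <hostnqn> <subnqn>' in the error message, and host2 was never added to cnode1, so tcp_sock_get_key finds nothing and the handshake is dropped just as in the wrong-key case. For reference, the single RPC that would satisfy this lookup mirrors what was done for host1; it is deliberately not run by the test:

    # Sketch only: the test intentionally leaves host2 unregistered so this attach must fail.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0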
00:14:01.443 request: 00:14:01.443 { 00:14:01.443 "name": "TLSTEST", 00:14:01.443 "trtype": "tcp", 00:14:01.443 "traddr": "10.0.0.3", 00:14:01.443 "adrfam": "ipv4", 00:14:01.443 "trsvcid": "4420", 00:14:01.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.443 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:01.443 "prchk_reftag": false, 00:14:01.443 "prchk_guard": false, 00:14:01.443 "hdgst": false, 00:14:01.443 "ddgst": false, 00:14:01.443 "psk": "key0", 00:14:01.443 "allow_unrecognized_csi": false, 00:14:01.443 "method": "bdev_nvme_attach_controller", 00:14:01.443 "req_id": 1 00:14:01.443 } 00:14:01.443 Got JSON-RPC error response 00:14:01.443 response: 00:14:01.443 { 00:14:01.443 "code": -5, 00:14:01.443 "message": "Input/output error" 00:14:01.443 } 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71845 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71845 ']' 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71845 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71845 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:01.443 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:01.444 killing process with pid 71845 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71845' 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71845 00:14:01.444 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.444 00:14:01.444 Latency(us) 00:14:01.444 [2024-12-10T21:41:02.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.444 [2024-12-10T21:41:02.227Z] =================================================================================================================== 00:14:01.444 [2024-12-10T21:41:02.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71845 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iMHNnv7nNn 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iMHNnv7nNn 
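Between cases each bdevperf instance is torn down with killprocess, whose steps repeat verbatim in the trace: confirm the pid still exists with kill -0, inspect the process name with ps before signalling it, then kill and wait so the RPC socket is free for the next case. A rough sketch of that pattern; the helper in autotest_common.sh additionally special-cases sudo-wrapped processes and non-Linux hosts, which is omitted here:

    # killprocess sketch, following the kill -0 / ps / kill / wait sequence traced above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                    # nothing to do if the process is already gone
        ps --no-headers -o comm= "$pid"               # e.g. reactor_2 for a bdevperf core thread
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap it so the socket and listen port are released
    }
    killprocess "$bdevperf_pid"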
00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.iMHNnv7nNn 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iMHNnv7nNn 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71866 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71866 /var/tmp/bdevperf.sock 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71866 ']' 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.444 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.701 [2024-12-10 21:41:02.232141] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:01.701 [2024-12-10 21:41:02.232437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71866 ] 00:14:01.701 [2024-12-10 21:41:02.387172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.701 [2024-12-10 21:41:02.420625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.701 [2024-12-10 21:41:02.450113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.959 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.959 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:01.959 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iMHNnv7nNn 00:14:02.251 21:41:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.533 [2024-12-10 21:41:03.061787] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.533 [2024-12-10 21:41:03.069256] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:02.533 [2024-12-10 21:41:03.069301] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:02.533 [2024-12-10 21:41:03.069355] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:02.533 [2024-12-10 21:41:03.069428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d6150 (107): Transport endpoint is not connected 00:14:02.533 [2024-12-10 21:41:03.070417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d6150 (9): Bad file descriptor 00:14:02.533 [2024-12-10 21:41:03.071415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:02.533 [2024-12-10 21:41:03.071451] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:02.533 [2024-12-10 21:41:03.071464] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:02.533 [2024-12-10 21:41:03.071480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
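The @153 case flips the mismatch to the subsystem side: host1 and key0 are valid, but nqn.2016-06.io.spdk:cnode2 was never created in this run, so the PSK identity 'NVMe0R01 ...host1 ...cnode2' again resolves to nothing and the connection is dropped. When triaging this on a live target it is usually faster to ask the target what it actually has than to re-read the setup trace; a short diagnostic sketch, noting that the exact output fields of these RPCs can vary between SPDK versions:

    # Ask the running nvmf_tgt what it knows; in this run only cnode1/host1 and key0 should appear.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_get_subsystems      # subsystems, their listeners (10.0.0.3:4420 with TLS) and allowed hosts
    $rpc keyring_get_keys         # registered keys, e.g. key0 backed by /tmp/tmp.iMHNnv7nNn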
00:14:02.533 request: 00:14:02.533 { 00:14:02.533 "name": "TLSTEST", 00:14:02.533 "trtype": "tcp", 00:14:02.533 "traddr": "10.0.0.3", 00:14:02.533 "adrfam": "ipv4", 00:14:02.533 "trsvcid": "4420", 00:14:02.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:02.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.533 "prchk_reftag": false, 00:14:02.533 "prchk_guard": false, 00:14:02.533 "hdgst": false, 00:14:02.533 "ddgst": false, 00:14:02.533 "psk": "key0", 00:14:02.533 "allow_unrecognized_csi": false, 00:14:02.533 "method": "bdev_nvme_attach_controller", 00:14:02.533 "req_id": 1 00:14:02.533 } 00:14:02.533 Got JSON-RPC error response 00:14:02.533 response: 00:14:02.533 { 00:14:02.533 "code": -5, 00:14:02.533 "message": "Input/output error" 00:14:02.533 } 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71866 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71866 ']' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71866 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71866 00:14:02.533 killing process with pid 71866 00:14:02.533 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.533 00:14:02.533 Latency(us) 00:14:02.533 [2024-12-10T21:41:03.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.533 [2024-12-10T21:41:03.316Z] =================================================================================================================== 00:14:02.533 [2024-12-10T21:41:03.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71866' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71866 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71866 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:02.533 21:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71883 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71883 /var/tmp/bdevperf.sock 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71883 ']' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.533 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 [2024-12-10 21:41:03.332771] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:02.792 [2024-12-10 21:41:03.332872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71883 ] 00:14:02.792 [2024-12-10 21:41:03.482397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.792 [2024-12-10 21:41:03.516169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.792 [2024-12-10 21:41:03.546574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.050 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.051 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:03.051 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:03.309 [2024-12-10 21:41:03.869857] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:03.309 [2024-12-10 21:41:03.870182] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:03.309 request: 00:14:03.309 { 00:14:03.309 "name": "key0", 00:14:03.309 "path": "", 00:14:03.309 "method": "keyring_file_add_key", 00:14:03.309 "req_id": 1 00:14:03.309 } 00:14:03.309 Got JSON-RPC error response 00:14:03.309 response: 00:14:03.309 { 00:14:03.309 "code": -1, 00:14:03.309 "message": "Operation not permitted" 00:14:03.309 } 00:14:03.309 21:41:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:03.568 [2024-12-10 21:41:04.129997] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.568 [2024-12-10 21:41:04.130088] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:03.568 request: 00:14:03.568 { 00:14:03.568 "name": "TLSTEST", 00:14:03.568 "trtype": "tcp", 00:14:03.568 "traddr": "10.0.0.3", 00:14:03.568 "adrfam": "ipv4", 00:14:03.568 "trsvcid": "4420", 00:14:03.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.568 "prchk_reftag": false, 00:14:03.568 "prchk_guard": false, 00:14:03.568 "hdgst": false, 00:14:03.568 "ddgst": false, 00:14:03.568 "psk": "key0", 00:14:03.568 "allow_unrecognized_csi": false, 00:14:03.568 "method": "bdev_nvme_attach_controller", 00:14:03.568 "req_id": 1 00:14:03.568 } 00:14:03.568 Got JSON-RPC error response 00:14:03.568 response: 00:14:03.568 { 00:14:03.568 "code": -126, 00:14:03.568 "message": "Required key not available" 00:14:03.568 } 00:14:03.568 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71883 00:14:03.568 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71883 ']' 00:14:03.568 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71883 00:14:03.568 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:03.568 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.568 21:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71883 00:14:03.568 killing process with pid 71883 00:14:03.568 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.568 00:14:03.568 Latency(us) 00:14:03.568 [2024-12-10T21:41:04.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.568 [2024-12-10T21:41:04.352Z] =================================================================================================================== 00:14:03.569 [2024-12-10T21:41:04.352Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71883' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71883 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71883 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71452 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71452 ']' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71452 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71452 00:14:03.569 killing process with pid 71452 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71452' 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71452 00:14:03.569 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71452 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.MtgUcoG7Uo 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.MtgUcoG7Uo 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71918 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71918 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71918 ']' 00:14:03.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.828 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.087 [2024-12-10 21:41:04.622429] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:04.087 [2024-12-10 21:41:04.622573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.087 [2024-12-10 21:41:04.780290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.087 [2024-12-10 21:41:04.816711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.087 [2024-12-10 21:41:04.816777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
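Note: the target/tls.sh@160-@163 steps above prepare the PSK used for the rest of the run: format_interchange_psk wraps the raw hex key in the NVMe TLS PSK interchange format with digest id 2 (SHA-384), and the resulting string is written to a mktemp file that is immediately restricted to 0600. A minimal shell sketch of the same preparation, reusing the key value and file name from this run (any other owner-only file path behaves the same way):

  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  keyfile=$(mktemp)              # /tmp/tmp.MtgUcoG7Uo in this run
  echo -n "$key" > "$keyfile"    # no trailing newline, as target/tls.sh@162 does
  chmod 0600 "$keyfile"          # keyring_file_add_key rejects more permissive modes (see the 0100666 failures later in this log)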
00:14:04.087 [2024-12-10 21:41:04.816792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.087 [2024-12-10 21:41:04.816801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.087 [2024-12-10 21:41:04.816810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.087 [2024-12-10 21:41:04.817170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.087 [2024-12-10 21:41:04.851418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MtgUcoG7Uo 00:14:04.346 21:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:04.603 [2024-12-10 21:41:05.217573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.603 21:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:04.861 21:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:05.120 [2024-12-10 21:41:05.817701] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.120 [2024-12-10 21:41:05.818108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.120 21:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:05.378 malloc0 00:14:05.378 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:05.636 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:05.894 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtgUcoG7Uo 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
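Note: setup_nvmf_tgt (target/tls.sh@50-@59), traced above for nvmfpid 71918, is the target-side half of every positive case in this test: a TCP transport, a subsystem backed by a malloc namespace, a TLS-enabled listener (-k), the PSK registered in the keyring, and the host NQN allowed with that PSK. Condensed to the bare RPC sequence, with the NQNs, address, and key path taken from this run (rpc.py stands for scripts/rpc.py in the SPDK tree):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: secure (TLS) listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0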
00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MtgUcoG7Uo 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71973 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71973 /var/tmp/bdevperf.sock 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71973 ']' 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.152 21:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.411 [2024-12-10 21:41:06.987339] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:06.411 [2024-12-10 21:41:06.987721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71973 ] 00:14:06.411 [2024-12-10 21:41:07.138622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.411 [2024-12-10 21:41:07.190909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.670 [2024-12-10 21:41:07.222452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.235 21:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.235 21:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.235 21:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:07.493 21:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.059 [2024-12-10 21:41:08.543260] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.059 TLSTESTn1 00:14:08.059 21:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:08.059 Running I/O for 10 seconds... 00:14:10.367 3705.00 IOPS, 14.47 MiB/s [2024-12-10T21:41:12.086Z] 3827.50 IOPS, 14.95 MiB/s [2024-12-10T21:41:13.021Z] 3836.67 IOPS, 14.99 MiB/s [2024-12-10T21:41:13.956Z] 3878.25 IOPS, 15.15 MiB/s [2024-12-10T21:41:14.892Z] 3876.80 IOPS, 15.14 MiB/s [2024-12-10T21:41:15.826Z] 3893.67 IOPS, 15.21 MiB/s [2024-12-10T21:41:16.759Z] 3902.43 IOPS, 15.24 MiB/s [2024-12-10T21:41:18.133Z] 3912.25 IOPS, 15.28 MiB/s [2024-12-10T21:41:19.068Z] 3920.11 IOPS, 15.31 MiB/s [2024-12-10T21:41:19.068Z] 3920.90 IOPS, 15.32 MiB/s 00:14:18.285 Latency(us) 00:14:18.285 [2024-12-10T21:41:19.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.285 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:18.285 Verification LBA range: start 0x0 length 0x2000 00:14:18.285 TLSTESTn1 : 10.02 3927.10 15.34 0.00 0.00 32534.84 5689.72 30742.34 00:14:18.285 [2024-12-10T21:41:19.068Z] =================================================================================================================== 00:14:18.285 [2024-12-10T21:41:19.068Z] Total : 3927.10 15.34 0.00 0.00 32534.84 5689.72 30742.34 00:14:18.285 { 00:14:18.285 "results": [ 00:14:18.285 { 00:14:18.285 "job": "TLSTESTn1", 00:14:18.285 "core_mask": "0x4", 00:14:18.285 "workload": "verify", 00:14:18.285 "status": "finished", 00:14:18.285 "verify_range": { 00:14:18.285 "start": 0, 00:14:18.285 "length": 8192 00:14:18.285 }, 00:14:18.285 "queue_depth": 128, 00:14:18.285 "io_size": 4096, 00:14:18.285 "runtime": 10.01654, 00:14:18.285 "iops": 3927.104569042803, 00:14:18.285 "mibps": 15.34025222282345, 00:14:18.285 "io_failed": 0, 00:14:18.285 "io_timeout": 0, 00:14:18.285 "avg_latency_us": 32534.838416625065, 00:14:18.285 "min_latency_us": 5689.716363636364, 00:14:18.285 
"max_latency_us": 30742.34181818182 00:14:18.285 } 00:14:18.285 ], 00:14:18.285 "core_count": 1 00:14:18.285 } 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71973 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71973 ']' 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71973 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71973 00:14:18.285 killing process with pid 71973 00:14:18.285 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.285 00:14:18.285 Latency(us) 00:14:18.285 [2024-12-10T21:41:19.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.285 [2024-12-10T21:41:19.068Z] =================================================================================================================== 00:14:18.285 [2024-12-10T21:41:19.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.285 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71973' 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71973 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71973 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.MtgUcoG7Uo 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtgUcoG7Uo 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtgUcoG7Uo 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:18.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtgUcoG7Uo 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MtgUcoG7Uo 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72104 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72104 /var/tmp/bdevperf.sock 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72104 ']' 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.286 21:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.286 [2024-12-10 21:41:19.029098] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:18.286 [2024-12-10 21:41:19.029720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72104 ] 00:14:18.544 [2024-12-10 21:41:19.180768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.544 [2024-12-10 21:41:19.230782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.544 [2024-12-10 21:41:19.267205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.544 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.544 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:18.544 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:19.119 [2024-12-10 21:41:19.607739] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MtgUcoG7Uo': 0100666 00:14:19.119 [2024-12-10 21:41:19.608032] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:19.119 request: 00:14:19.119 { 00:14:19.119 "name": "key0", 00:14:19.119 "path": "/tmp/tmp.MtgUcoG7Uo", 00:14:19.119 "method": "keyring_file_add_key", 00:14:19.119 "req_id": 1 00:14:19.119 } 00:14:19.119 Got JSON-RPC error response 00:14:19.119 response: 00:14:19.119 { 00:14:19.119 "code": -1, 00:14:19.119 "message": "Operation not permitted" 00:14:19.119 } 00:14:19.119 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:19.377 [2024-12-10 21:41:19.923911] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.377 [2024-12-10 21:41:19.924219] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:19.377 request: 00:14:19.377 { 00:14:19.377 "name": "TLSTEST", 00:14:19.377 "trtype": "tcp", 00:14:19.377 "traddr": "10.0.0.3", 00:14:19.377 "adrfam": "ipv4", 00:14:19.377 "trsvcid": "4420", 00:14:19.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.377 "prchk_reftag": false, 00:14:19.377 "prchk_guard": false, 00:14:19.377 "hdgst": false, 00:14:19.377 "ddgst": false, 00:14:19.377 "psk": "key0", 00:14:19.377 "allow_unrecognized_csi": false, 00:14:19.377 "method": "bdev_nvme_attach_controller", 00:14:19.377 "req_id": 1 00:14:19.377 } 00:14:19.377 Got JSON-RPC error response 00:14:19.377 response: 00:14:19.377 { 00:14:19.377 "code": -126, 00:14:19.377 "message": "Required key not available" 00:14:19.377 } 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72104 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72104 ']' 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72104 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72104 00:14:19.377 killing process with pid 72104 00:14:19.377 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.377 00:14:19.377 Latency(us) 00:14:19.377 [2024-12-10T21:41:20.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.377 [2024-12-10T21:41:20.160Z] =================================================================================================================== 00:14:19.377 [2024-12-10T21:41:20.160Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72104' 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72104 00:14:19.377 21:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72104 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71918 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71918 ']' 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71918 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71918 00:14:19.377 killing process with pid 71918 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71918' 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71918 00:14:19.377 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71918 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72136 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72136 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72136 ']' 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.655 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.656 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.656 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.656 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.656 [2024-12-10 21:41:20.354012] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:19.656 [2024-12-10 21:41:20.354128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.943 [2024-12-10 21:41:20.502509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.943 [2024-12-10 21:41:20.534400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.943 [2024-12-10 21:41:20.534636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.943 [2024-12-10 21:41:20.534669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.943 [2024-12-10 21:41:20.534680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.943 [2024-12-10 21:41:20.534692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:19.943 [2024-12-10 21:41:20.535039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.943 [2024-12-10 21:41:20.564589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MtgUcoG7Uo 00:14:19.943 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:20.201 [2024-12-10 21:41:20.904780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.201 21:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:20.459 21:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:20.717 [2024-12-10 21:41:21.488894] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:20.717 [2024-12-10 21:41:21.489357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.975 21:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:21.233 malloc0 00:14:21.233 21:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.492 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:21.750 
[2024-12-10 21:41:22.367974] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MtgUcoG7Uo': 0100666 00:14:21.750 [2024-12-10 21:41:22.368248] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:21.750 request: 00:14:21.750 { 00:14:21.750 "name": "key0", 00:14:21.750 "path": "/tmp/tmp.MtgUcoG7Uo", 00:14:21.750 "method": "keyring_file_add_key", 00:14:21.750 "req_id": 1 00:14:21.750 } 00:14:21.750 Got JSON-RPC error response 00:14:21.750 response: 00:14:21.750 { 00:14:21.750 "code": -1, 00:14:21.750 "message": "Operation not permitted" 00:14:21.750 } 00:14:21.750 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:22.008 [2024-12-10 21:41:22.696082] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:22.008 [2024-12-10 21:41:22.696388] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:22.008 request: 00:14:22.008 { 00:14:22.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.008 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.008 "psk": "key0", 00:14:22.008 "method": "nvmf_subsystem_add_host", 00:14:22.008 "req_id": 1 00:14:22.008 } 00:14:22.008 Got JSON-RPC error response 00:14:22.008 response: 00:14:22.008 { 00:14:22.008 "code": -32603, 00:14:22.008 "message": "Internal error" 00:14:22.008 } 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72136 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72136 ']' 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72136 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72136 00:14:22.008 killing process with pid 72136 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72136' 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72136 00:14:22.008 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72136 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.MtgUcoG7Uo 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.266 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72196 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72196 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72196 ']' 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.267 21:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.267 [2024-12-10 21:41:22.994022] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:22.267 [2024-12-10 21:41:22.994139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.524 [2024-12-10 21:41:23.153118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.524 [2024-12-10 21:41:23.193315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.524 [2024-12-10 21:41:23.193376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.524 [2024-12-10 21:41:23.193390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.524 [2024-12-10 21:41:23.193400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.524 [2024-12-10 21:41:23.193409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
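Note: both negative cases above fail at the same check. With the key file chmod'ed to 0666, keyring_file_add_key refuses it ('Invalid permissions for key file ... 0100666'), so key0 never exists and every later reference fails: bdev_nvme_attach_controller on the initiator ('Could not load PSK: key0') and nvmf_subsystem_add_host on the target ("Key 'key0' does not exist", surfaced as an internal error). Restoring owner-only access at target/tls.sh@182 is what allows the restarted target (nvmfpid 72196) to be configured below. The contrast in two lines, reusing the key path from this run:

  chmod 0666 /tmp/tmp.MtgUcoG7Uo && rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo   # rejected: invalid permissions
  chmod 0600 /tmp/tmp.MtgUcoG7Uo && rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo   # accepted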
00:14:22.525 [2024-12-10 21:41:23.193809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.525 [2024-12-10 21:41:23.227152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.525 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.525 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:22.525 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.525 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:22.525 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.782 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.782 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:22.782 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MtgUcoG7Uo 00:14:22.782 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:23.040 [2024-12-10 21:41:23.579102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.040 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:23.297 21:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:23.556 [2024-12-10 21:41:24.107260] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.556 [2024-12-10 21:41:24.107507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.556 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:23.815 malloc0 00:14:23.815 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:24.073 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:24.331 21:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72250 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
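Note: the target/tls.sh@188-@189 launch above follows the same pattern as every bdevperf instance in this log: bdevperf is started with -z so it does not run the workload on its own but waits to be driven over the RPC socket named by -r, using a queue depth of 128, 4 KiB I/Os, a verify workload, and a 10-second run time; the waitforlisten helper from autotest_common.sh then blocks until that socket accepts connections before any keyring or attach RPCs are sent. A sketch of the launch, with flags copied from the trace and the path relative to the SPDK tree:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # poll until the RPC socket is up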
00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72250 /var/tmp/bdevperf.sock 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72250 ']' 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.588 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.588 [2024-12-10 21:41:25.307033] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:24.588 [2024-12-10 21:41:25.307319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:14:24.845 [2024-12-10 21:41:25.454010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.845 [2024-12-10 21:41:25.498356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.845 [2024-12-10 21:41:25.527629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.845 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.845 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:24.845 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:25.411 21:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:25.411 [2024-12-10 21:41:26.172358] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.669 TLSTESTn1 00:14:25.669 21:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:25.927 21:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:25.927 "subsystems": [ 00:14:25.927 { 00:14:25.927 "subsystem": "keyring", 00:14:25.927 "config": [ 00:14:25.927 { 00:14:25.927 "method": "keyring_file_add_key", 00:14:25.927 "params": { 00:14:25.927 "name": "key0", 00:14:25.927 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:25.927 } 00:14:25.927 } 00:14:25.927 ] 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "subsystem": "iobuf", 00:14:25.927 "config": [ 00:14:25.927 { 00:14:25.927 "method": "iobuf_set_options", 00:14:25.927 "params": { 00:14:25.927 "small_pool_count": 8192, 00:14:25.927 "large_pool_count": 1024, 00:14:25.927 "small_bufsize": 8192, 00:14:25.927 "large_bufsize": 135168, 00:14:25.927 "enable_numa": false 00:14:25.927 } 00:14:25.927 } 00:14:25.927 ] 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 
"subsystem": "sock", 00:14:25.927 "config": [ 00:14:25.927 { 00:14:25.927 "method": "sock_set_default_impl", 00:14:25.927 "params": { 00:14:25.927 "impl_name": "uring" 00:14:25.927 } 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "method": "sock_impl_set_options", 00:14:25.927 "params": { 00:14:25.927 "impl_name": "ssl", 00:14:25.927 "recv_buf_size": 4096, 00:14:25.927 "send_buf_size": 4096, 00:14:25.927 "enable_recv_pipe": true, 00:14:25.927 "enable_quickack": false, 00:14:25.927 "enable_placement_id": 0, 00:14:25.927 "enable_zerocopy_send_server": true, 00:14:25.927 "enable_zerocopy_send_client": false, 00:14:25.927 "zerocopy_threshold": 0, 00:14:25.927 "tls_version": 0, 00:14:25.927 "enable_ktls": false 00:14:25.927 } 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "method": "sock_impl_set_options", 00:14:25.927 "params": { 00:14:25.927 "impl_name": "posix", 00:14:25.927 "recv_buf_size": 2097152, 00:14:25.927 "send_buf_size": 2097152, 00:14:25.927 "enable_recv_pipe": true, 00:14:25.927 "enable_quickack": false, 00:14:25.927 "enable_placement_id": 0, 00:14:25.927 "enable_zerocopy_send_server": true, 00:14:25.927 "enable_zerocopy_send_client": false, 00:14:25.927 "zerocopy_threshold": 0, 00:14:25.927 "tls_version": 0, 00:14:25.928 "enable_ktls": false 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "sock_impl_set_options", 00:14:25.928 "params": { 00:14:25.928 "impl_name": "uring", 00:14:25.928 "recv_buf_size": 2097152, 00:14:25.928 "send_buf_size": 2097152, 00:14:25.928 "enable_recv_pipe": true, 00:14:25.928 "enable_quickack": false, 00:14:25.928 "enable_placement_id": 0, 00:14:25.928 "enable_zerocopy_send_server": false, 00:14:25.928 "enable_zerocopy_send_client": false, 00:14:25.928 "zerocopy_threshold": 0, 00:14:25.928 "tls_version": 0, 00:14:25.928 "enable_ktls": false 00:14:25.928 } 00:14:25.928 } 00:14:25.928 ] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "vmd", 00:14:25.928 "config": [] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "accel", 00:14:25.928 "config": [ 00:14:25.928 { 00:14:25.928 "method": "accel_set_options", 00:14:25.928 "params": { 00:14:25.928 "small_cache_size": 128, 00:14:25.928 "large_cache_size": 16, 00:14:25.928 "task_count": 2048, 00:14:25.928 "sequence_count": 2048, 00:14:25.928 "buf_count": 2048 00:14:25.928 } 00:14:25.928 } 00:14:25.928 ] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "bdev", 00:14:25.928 "config": [ 00:14:25.928 { 00:14:25.928 "method": "bdev_set_options", 00:14:25.928 "params": { 00:14:25.928 "bdev_io_pool_size": 65535, 00:14:25.928 "bdev_io_cache_size": 256, 00:14:25.928 "bdev_auto_examine": true, 00:14:25.928 "iobuf_small_cache_size": 128, 00:14:25.928 "iobuf_large_cache_size": 16 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_raid_set_options", 00:14:25.928 "params": { 00:14:25.928 "process_window_size_kb": 1024, 00:14:25.928 "process_max_bandwidth_mb_sec": 0 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_iscsi_set_options", 00:14:25.928 "params": { 00:14:25.928 "timeout_sec": 30 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_nvme_set_options", 00:14:25.928 "params": { 00:14:25.928 "action_on_timeout": "none", 00:14:25.928 "timeout_us": 0, 00:14:25.928 "timeout_admin_us": 0, 00:14:25.928 "keep_alive_timeout_ms": 10000, 00:14:25.928 "arbitration_burst": 0, 00:14:25.928 "low_priority_weight": 0, 00:14:25.928 "medium_priority_weight": 0, 00:14:25.928 "high_priority_weight": 0, 00:14:25.928 
"nvme_adminq_poll_period_us": 10000, 00:14:25.928 "nvme_ioq_poll_period_us": 0, 00:14:25.928 "io_queue_requests": 0, 00:14:25.928 "delay_cmd_submit": true, 00:14:25.928 "transport_retry_count": 4, 00:14:25.928 "bdev_retry_count": 3, 00:14:25.928 "transport_ack_timeout": 0, 00:14:25.928 "ctrlr_loss_timeout_sec": 0, 00:14:25.928 "reconnect_delay_sec": 0, 00:14:25.928 "fast_io_fail_timeout_sec": 0, 00:14:25.928 "disable_auto_failback": false, 00:14:25.928 "generate_uuids": false, 00:14:25.928 "transport_tos": 0, 00:14:25.928 "nvme_error_stat": false, 00:14:25.928 "rdma_srq_size": 0, 00:14:25.928 "io_path_stat": false, 00:14:25.928 "allow_accel_sequence": false, 00:14:25.928 "rdma_max_cq_size": 0, 00:14:25.928 "rdma_cm_event_timeout_ms": 0, 00:14:25.928 "dhchap_digests": [ 00:14:25.928 "sha256", 00:14:25.928 "sha384", 00:14:25.928 "sha512" 00:14:25.928 ], 00:14:25.928 "dhchap_dhgroups": [ 00:14:25.928 "null", 00:14:25.928 "ffdhe2048", 00:14:25.928 "ffdhe3072", 00:14:25.928 "ffdhe4096", 00:14:25.928 "ffdhe6144", 00:14:25.928 "ffdhe8192" 00:14:25.928 ], 00:14:25.928 "rdma_umr_per_io": false 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_nvme_set_hotplug", 00:14:25.928 "params": { 00:14:25.928 "period_us": 100000, 00:14:25.928 "enable": false 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_malloc_create", 00:14:25.928 "params": { 00:14:25.928 "name": "malloc0", 00:14:25.928 "num_blocks": 8192, 00:14:25.928 "block_size": 4096, 00:14:25.928 "physical_block_size": 4096, 00:14:25.928 "uuid": "9c5a0f23-6972-4017-95a4-9d3921f8b4f8", 00:14:25.928 "optimal_io_boundary": 0, 00:14:25.928 "md_size": 0, 00:14:25.928 "dif_type": 0, 00:14:25.928 "dif_is_head_of_md": false, 00:14:25.928 "dif_pi_format": 0 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "bdev_wait_for_examine" 00:14:25.928 } 00:14:25.928 ] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "nbd", 00:14:25.928 "config": [] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "scheduler", 00:14:25.928 "config": [ 00:14:25.928 { 00:14:25.928 "method": "framework_set_scheduler", 00:14:25.928 "params": { 00:14:25.928 "name": "static" 00:14:25.928 } 00:14:25.928 } 00:14:25.928 ] 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "subsystem": "nvmf", 00:14:25.928 "config": [ 00:14:25.928 { 00:14:25.928 "method": "nvmf_set_config", 00:14:25.928 "params": { 00:14:25.928 "discovery_filter": "match_any", 00:14:25.928 "admin_cmd_passthru": { 00:14:25.928 "identify_ctrlr": false 00:14:25.928 }, 00:14:25.928 "dhchap_digests": [ 00:14:25.928 "sha256", 00:14:25.928 "sha384", 00:14:25.928 "sha512" 00:14:25.928 ], 00:14:25.928 "dhchap_dhgroups": [ 00:14:25.928 "null", 00:14:25.928 "ffdhe2048", 00:14:25.928 "ffdhe3072", 00:14:25.928 "ffdhe4096", 00:14:25.928 "ffdhe6144", 00:14:25.928 "ffdhe8192" 00:14:25.928 ] 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_set_max_subsystems", 00:14:25.928 "params": { 00:14:25.928 "max_subsystems": 1024 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_set_crdt", 00:14:25.928 "params": { 00:14:25.928 "crdt1": 0, 00:14:25.928 "crdt2": 0, 00:14:25.928 "crdt3": 0 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_create_transport", 00:14:25.928 "params": { 00:14:25.928 "trtype": "TCP", 00:14:25.928 "max_queue_depth": 128, 00:14:25.928 "max_io_qpairs_per_ctrlr": 127, 00:14:25.928 "in_capsule_data_size": 4096, 00:14:25.928 "max_io_size": 131072, 00:14:25.928 "io_unit_size": 131072, 
00:14:25.928 "max_aq_depth": 128, 00:14:25.928 "num_shared_buffers": 511, 00:14:25.928 "buf_cache_size": 4294967295, 00:14:25.928 "dif_insert_or_strip": false, 00:14:25.928 "zcopy": false, 00:14:25.928 "c2h_success": false, 00:14:25.928 "sock_priority": 0, 00:14:25.928 "abort_timeout_sec": 1, 00:14:25.928 "ack_timeout": 0, 00:14:25.928 "data_wr_pool_size": 0 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_create_subsystem", 00:14:25.928 "params": { 00:14:25.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.928 "allow_any_host": false, 00:14:25.928 "serial_number": "SPDK00000000000001", 00:14:25.928 "model_number": "SPDK bdev Controller", 00:14:25.928 "max_namespaces": 10, 00:14:25.928 "min_cntlid": 1, 00:14:25.928 "max_cntlid": 65519, 00:14:25.928 "ana_reporting": false 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_subsystem_add_host", 00:14:25.928 "params": { 00:14:25.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.928 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.928 "psk": "key0" 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_subsystem_add_ns", 00:14:25.928 "params": { 00:14:25.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.928 "namespace": { 00:14:25.928 "nsid": 1, 00:14:25.928 "bdev_name": "malloc0", 00:14:25.928 "nguid": "9C5A0F236972401795A49D3921F8B4F8", 00:14:25.928 "uuid": "9c5a0f23-6972-4017-95a4-9d3921f8b4f8", 00:14:25.928 "no_auto_visible": false 00:14:25.928 } 00:14:25.928 } 00:14:25.928 }, 00:14:25.928 { 00:14:25.928 "method": "nvmf_subsystem_add_listener", 00:14:25.928 "params": { 00:14:25.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.929 "listen_address": { 00:14:25.929 "trtype": "TCP", 00:14:25.929 "adrfam": "IPv4", 00:14:25.929 "traddr": "10.0.0.3", 00:14:25.929 "trsvcid": "4420" 00:14:25.929 }, 00:14:25.929 "secure_channel": true 00:14:25.929 } 00:14:25.929 } 00:14:25.929 ] 00:14:25.929 } 00:14:25.929 ] 00:14:25.929 }' 00:14:25.929 21:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:26.495 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:26.495 "subsystems": [ 00:14:26.495 { 00:14:26.495 "subsystem": "keyring", 00:14:26.495 "config": [ 00:14:26.495 { 00:14:26.495 "method": "keyring_file_add_key", 00:14:26.495 "params": { 00:14:26.495 "name": "key0", 00:14:26.495 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:26.495 } 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "subsystem": "iobuf", 00:14:26.495 "config": [ 00:14:26.495 { 00:14:26.495 "method": "iobuf_set_options", 00:14:26.495 "params": { 00:14:26.495 "small_pool_count": 8192, 00:14:26.495 "large_pool_count": 1024, 00:14:26.495 "small_bufsize": 8192, 00:14:26.495 "large_bufsize": 135168, 00:14:26.495 "enable_numa": false 00:14:26.495 } 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "subsystem": "sock", 00:14:26.495 "config": [ 00:14:26.495 { 00:14:26.495 "method": "sock_set_default_impl", 00:14:26.495 "params": { 00:14:26.495 "impl_name": "uring" 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "sock_impl_set_options", 00:14:26.495 "params": { 00:14:26.495 "impl_name": "ssl", 00:14:26.495 "recv_buf_size": 4096, 00:14:26.495 "send_buf_size": 4096, 00:14:26.495 "enable_recv_pipe": true, 00:14:26.495 "enable_quickack": false, 00:14:26.495 "enable_placement_id": 0, 00:14:26.495 "enable_zerocopy_send_server": true, 
00:14:26.495 "enable_zerocopy_send_client": false, 00:14:26.495 "zerocopy_threshold": 0, 00:14:26.495 "tls_version": 0, 00:14:26.495 "enable_ktls": false 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "sock_impl_set_options", 00:14:26.495 "params": { 00:14:26.495 "impl_name": "posix", 00:14:26.495 "recv_buf_size": 2097152, 00:14:26.495 "send_buf_size": 2097152, 00:14:26.495 "enable_recv_pipe": true, 00:14:26.495 "enable_quickack": false, 00:14:26.495 "enable_placement_id": 0, 00:14:26.495 "enable_zerocopy_send_server": true, 00:14:26.495 "enable_zerocopy_send_client": false, 00:14:26.495 "zerocopy_threshold": 0, 00:14:26.495 "tls_version": 0, 00:14:26.495 "enable_ktls": false 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "sock_impl_set_options", 00:14:26.495 "params": { 00:14:26.495 "impl_name": "uring", 00:14:26.495 "recv_buf_size": 2097152, 00:14:26.495 "send_buf_size": 2097152, 00:14:26.495 "enable_recv_pipe": true, 00:14:26.495 "enable_quickack": false, 00:14:26.495 "enable_placement_id": 0, 00:14:26.495 "enable_zerocopy_send_server": false, 00:14:26.495 "enable_zerocopy_send_client": false, 00:14:26.495 "zerocopy_threshold": 0, 00:14:26.495 "tls_version": 0, 00:14:26.495 "enable_ktls": false 00:14:26.495 } 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "subsystem": "vmd", 00:14:26.495 "config": [] 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "subsystem": "accel", 00:14:26.495 "config": [ 00:14:26.495 { 00:14:26.495 "method": "accel_set_options", 00:14:26.495 "params": { 00:14:26.495 "small_cache_size": 128, 00:14:26.495 "large_cache_size": 16, 00:14:26.495 "task_count": 2048, 00:14:26.495 "sequence_count": 2048, 00:14:26.495 "buf_count": 2048 00:14:26.495 } 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "subsystem": "bdev", 00:14:26.495 "config": [ 00:14:26.495 { 00:14:26.495 "method": "bdev_set_options", 00:14:26.495 "params": { 00:14:26.495 "bdev_io_pool_size": 65535, 00:14:26.495 "bdev_io_cache_size": 256, 00:14:26.495 "bdev_auto_examine": true, 00:14:26.495 "iobuf_small_cache_size": 128, 00:14:26.495 "iobuf_large_cache_size": 16 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_raid_set_options", 00:14:26.495 "params": { 00:14:26.495 "process_window_size_kb": 1024, 00:14:26.495 "process_max_bandwidth_mb_sec": 0 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_iscsi_set_options", 00:14:26.495 "params": { 00:14:26.495 "timeout_sec": 30 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_nvme_set_options", 00:14:26.495 "params": { 00:14:26.495 "action_on_timeout": "none", 00:14:26.495 "timeout_us": 0, 00:14:26.495 "timeout_admin_us": 0, 00:14:26.495 "keep_alive_timeout_ms": 10000, 00:14:26.495 "arbitration_burst": 0, 00:14:26.495 "low_priority_weight": 0, 00:14:26.495 "medium_priority_weight": 0, 00:14:26.495 "high_priority_weight": 0, 00:14:26.495 "nvme_adminq_poll_period_us": 10000, 00:14:26.495 "nvme_ioq_poll_period_us": 0, 00:14:26.495 "io_queue_requests": 512, 00:14:26.495 "delay_cmd_submit": true, 00:14:26.495 "transport_retry_count": 4, 00:14:26.495 "bdev_retry_count": 3, 00:14:26.495 "transport_ack_timeout": 0, 00:14:26.495 "ctrlr_loss_timeout_sec": 0, 00:14:26.495 "reconnect_delay_sec": 0, 00:14:26.495 "fast_io_fail_timeout_sec": 0, 00:14:26.495 "disable_auto_failback": false, 00:14:26.495 "generate_uuids": false, 00:14:26.495 "transport_tos": 0, 00:14:26.495 "nvme_error_stat": false, 00:14:26.495 
"rdma_srq_size": 0, 00:14:26.495 "io_path_stat": false, 00:14:26.495 "allow_accel_sequence": false, 00:14:26.495 "rdma_max_cq_size": 0, 00:14:26.495 "rdma_cm_event_timeout_ms": 0, 00:14:26.495 "dhchap_digests": [ 00:14:26.495 "sha256", 00:14:26.495 "sha384", 00:14:26.495 "sha512" 00:14:26.495 ], 00:14:26.495 "dhchap_dhgroups": [ 00:14:26.495 "null", 00:14:26.495 "ffdhe2048", 00:14:26.495 "ffdhe3072", 00:14:26.495 "ffdhe4096", 00:14:26.495 "ffdhe6144", 00:14:26.495 "ffdhe8192" 00:14:26.495 ], 00:14:26.495 "rdma_umr_per_io": false 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_nvme_attach_controller", 00:14:26.495 "params": { 00:14:26.495 "name": "TLSTEST", 00:14:26.495 "trtype": "TCP", 00:14:26.495 "adrfam": "IPv4", 00:14:26.495 "traddr": "10.0.0.3", 00:14:26.495 "trsvcid": "4420", 00:14:26.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.495 "prchk_reftag": false, 00:14:26.495 "prchk_guard": false, 00:14:26.495 "ctrlr_loss_timeout_sec": 0, 00:14:26.495 "reconnect_delay_sec": 0, 00:14:26.495 "fast_io_fail_timeout_sec": 0, 00:14:26.495 "psk": "key0", 00:14:26.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.495 "hdgst": false, 00:14:26.495 "ddgst": false, 00:14:26.495 "multipath": "multipath" 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_nvme_set_hotplug", 00:14:26.495 "params": { 00:14:26.495 "period_us": 100000, 00:14:26.495 "enable": false 00:14:26.495 } 00:14:26.495 }, 00:14:26.495 { 00:14:26.495 "method": "bdev_wait_for_examine" 00:14:26.495 } 00:14:26.495 ] 00:14:26.495 }, 00:14:26.496 { 00:14:26.496 "subsystem": "nbd", 00:14:26.496 "config": [] 00:14:26.496 } 00:14:26.496 ] 00:14:26.496 }' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72250 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72250 ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72250 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72250 00:14:26.496 killing process with pid 72250 00:14:26.496 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.496 00:14:26.496 Latency(us) 00:14:26.496 [2024-12-10T21:41:27.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.496 [2024-12-10T21:41:27.279Z] =================================================================================================================== 00:14:26.496 [2024-12-10T21:41:27.279Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72250' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72250 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72250 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 
-- # killprocess 72196 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72196 ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72196 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72196 00:14:26.496 killing process with pid 72196 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72196' 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72196 00:14:26.496 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72196 00:14:26.754 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:26.754 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.754 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:26.754 "subsystems": [ 00:14:26.754 { 00:14:26.754 "subsystem": "keyring", 00:14:26.754 "config": [ 00:14:26.754 { 00:14:26.754 "method": "keyring_file_add_key", 00:14:26.754 "params": { 00:14:26.754 "name": "key0", 00:14:26.754 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:26.754 } 00:14:26.754 } 00:14:26.754 ] 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "subsystem": "iobuf", 00:14:26.754 "config": [ 00:14:26.754 { 00:14:26.754 "method": "iobuf_set_options", 00:14:26.754 "params": { 00:14:26.754 "small_pool_count": 8192, 00:14:26.754 "large_pool_count": 1024, 00:14:26.754 "small_bufsize": 8192, 00:14:26.754 "large_bufsize": 135168, 00:14:26.754 "enable_numa": false 00:14:26.754 } 00:14:26.754 } 00:14:26.754 ] 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "subsystem": "sock", 00:14:26.754 "config": [ 00:14:26.754 { 00:14:26.754 "method": "sock_set_default_impl", 00:14:26.754 "params": { 00:14:26.754 "impl_name": "uring" 00:14:26.754 } 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "method": "sock_impl_set_options", 00:14:26.754 "params": { 00:14:26.754 "impl_name": "ssl", 00:14:26.754 "recv_buf_size": 4096, 00:14:26.754 "send_buf_size": 4096, 00:14:26.754 "enable_recv_pipe": true, 00:14:26.754 "enable_quickack": false, 00:14:26.754 "enable_placement_id": 0, 00:14:26.754 "enable_zerocopy_send_server": true, 00:14:26.754 "enable_zerocopy_send_client": false, 00:14:26.754 "zerocopy_threshold": 0, 00:14:26.754 "tls_version": 0, 00:14:26.754 "enable_ktls": false 00:14:26.754 } 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "method": "sock_impl_set_options", 00:14:26.754 "params": { 00:14:26.754 "impl_name": "posix", 00:14:26.754 "recv_buf_size": 2097152, 00:14:26.754 "send_buf_size": 2097152, 00:14:26.754 "enable_recv_pipe": true, 00:14:26.754 "enable_quickack": false, 00:14:26.754 "enable_placement_id": 0, 00:14:26.754 "enable_zerocopy_send_server": true, 00:14:26.754 "enable_zerocopy_send_client": false, 00:14:26.754 "zerocopy_threshold": 0, 00:14:26.754 "tls_version": 0, 00:14:26.754 
"enable_ktls": false 00:14:26.754 } 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "method": "sock_impl_set_options", 00:14:26.754 "params": { 00:14:26.754 "impl_name": "uring", 00:14:26.754 "recv_buf_size": 2097152, 00:14:26.754 "send_buf_size": 2097152, 00:14:26.754 "enable_recv_pipe": true, 00:14:26.754 "enable_quickack": false, 00:14:26.754 "enable_placement_id": 0, 00:14:26.754 "enable_zerocopy_send_server": false, 00:14:26.754 "enable_zerocopy_send_client": false, 00:14:26.754 "zerocopy_threshold": 0, 00:14:26.754 "tls_version": 0, 00:14:26.754 "enable_ktls": false 00:14:26.754 } 00:14:26.754 } 00:14:26.754 ] 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "subsystem": "vmd", 00:14:26.754 "config": [] 00:14:26.754 }, 00:14:26.754 { 00:14:26.754 "subsystem": "accel", 00:14:26.755 "config": [ 00:14:26.755 { 00:14:26.755 "method": "accel_set_options", 00:14:26.755 "params": { 00:14:26.755 "small_cache_size": 128, 00:14:26.755 "large_cache_size": 16, 00:14:26.755 "task_count": 2048, 00:14:26.755 "sequence_count": 2048, 00:14:26.755 "buf_count": 2048 00:14:26.755 } 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "subsystem": "bdev", 00:14:26.755 "config": [ 00:14:26.755 { 00:14:26.755 "method": "bdev_set_options", 00:14:26.755 "params": { 00:14:26.755 "bdev_io_pool_size": 65535, 00:14:26.755 "bdev_io_cache_size": 256, 00:14:26.755 "bdev_auto_examine": true, 00:14:26.755 "iobuf_small_cache_size": 128, 00:14:26.755 "iobuf_large_cache_size": 16 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_raid_set_options", 00:14:26.755 "params": { 00:14:26.755 "process_window_size_kb": 1024, 00:14:26.755 "process_max_bandwidth_mb_sec": 0 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_iscsi_set_options", 00:14:26.755 "params": { 00:14:26.755 "timeout_sec": 30 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_nvme_set_options", 00:14:26.755 "params": { 00:14:26.755 "action_on_timeout": "none", 00:14:26.755 "timeout_us": 0, 00:14:26.755 "timeout_admin_us": 0, 00:14:26.755 "keep_alive_timeout_ms": 10000, 00:14:26.755 "arbitration_burst": 0, 00:14:26.755 "low_priority_weight": 0, 00:14:26.755 "medium_priority_weight": 0, 00:14:26.755 "high_priority_weight": 0, 00:14:26.755 "nvme_adminq_poll_period_us": 10000, 00:14:26.755 "nvme_ioq_poll_period_us": 0, 00:14:26.755 "io_queue_requests": 0, 00:14:26.755 "delay_cmd_submit": true, 00:14:26.755 "transport_retry_count": 4, 00:14:26.755 "bdev_retry_count": 3, 00:14:26.755 "transport_ack_timeout": 0, 00:14:26.755 "ctrlr_loss_timeout_sec": 0, 00:14:26.755 "reconnect_delay_sec": 0, 00:14:26.755 "fast_io_fail_timeout_sec": 0, 00:14:26.755 "disable_auto_failback": false, 00:14:26.755 "generate_uuids": false, 00:14:26.755 "transport_tos": 0, 00:14:26.755 "nvme_error_stat": false, 00:14:26.755 "rdma_srq_size": 0, 00:14:26.755 "io_path_stat": false, 00:14:26.755 "allow_accel_sequence": false, 00:14:26.755 "rdma_max_cq_size": 0, 00:14:26.755 "rdma_cm_event_timeout_ms": 0, 00:14:26.755 "dhchap_digests": [ 00:14:26.755 "sha256", 00:14:26.755 "sha384", 00:14:26.755 "sha512" 00:14:26.755 ], 00:14:26.755 "dhchap_dhgroups": [ 00:14:26.755 "null", 00:14:26.755 "ffdhe2048", 00:14:26.755 "ffdhe3072", 00:14:26.755 "ffdhe4096", 00:14:26.755 "ffdhe6144", 00:14:26.755 "ffdhe8192" 00:14:26.755 ], 00:14:26.755 "rdma_umr_per_io": false 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_nvme_set_hotplug", 00:14:26.755 "params": { 00:14:26.755 "period_us": 100000, 
00:14:26.755 "enable": false 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_malloc_create", 00:14:26.755 "params": { 00:14:26.755 "name": "malloc0", 00:14:26.755 "num_blocks": 8192, 00:14:26.755 "block_size": 4096, 00:14:26.755 "physical_block_size": 4096, 00:14:26.755 "uuid": "9c5a0f23-6972-4017-95a4-9d3921f8b4f8", 00:14:26.755 "optimal_io_boundary": 0, 00:14:26.755 "md_size": 0, 00:14:26.755 "dif_type": 0, 00:14:26.755 "dif_is_head_of_md": false, 00:14:26.755 "dif_pi_format": 0 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "bdev_wait_for_examine" 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "subsystem": "nbd", 00:14:26.755 "config": [] 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "subsystem": "scheduler", 00:14:26.755 "config": [ 00:14:26.755 { 00:14:26.755 "method": "framework_set_scheduler", 00:14:26.755 "params": { 00:14:26.755 "name": "static" 00:14:26.755 } 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "subsystem": "nvmf", 00:14:26.755 "config": [ 00:14:26.755 { 00:14:26.755 "method": "nvmf_set_config", 00:14:26.755 "params": { 00:14:26.755 "discovery_filter": "match_any", 00:14:26.755 "admin_cmd_passthru": { 00:14:26.755 "identify_ctrlr": false 00:14:26.755 }, 00:14:26.755 "dhchap_digests": [ 00:14:26.755 "sha256", 00:14:26.755 "sha384", 00:14:26.755 "sha512" 00:14:26.755 ], 00:14:26.755 "dhchap_dhgroups": [ 00:14:26.755 "null", 00:14:26.755 "ffdhe2048", 00:14:26.755 "ffdhe3072", 00:14:26.755 "ffdhe4096", 00:14:26.755 "ffdhe6144", 00:14:26.755 "ffdhe8192" 00:14:26.755 ] 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_set_max_subsystems", 00:14:26.755 "params": { 00:14:26.755 "max_subsystems": 1024 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_set_crdt", 00:14:26.755 "params": { 00:14:26.755 "crdt1": 0, 00:14:26.755 "crdt2": 0, 00:14:26.755 "crdt3": 0 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_create_transport", 00:14:26.755 "params": { 00:14:26.755 "trtype": "TCP", 00:14:26.755 "max_queue_depth": 128, 00:14:26.755 "max_io_qpairs_per_ctrlr": 127, 00:14:26.755 "in_capsule_data_size": 4096, 00:14:26.755 "max_io_size": 131072, 00:14:26.755 "io_unit_size": 131072, 00:14:26.755 "max_aq_depth": 128, 00:14:26.755 "num_shared_buffers": 511, 00:14:26.755 "buf_cache_size": 4294967295, 00:14:26.755 "dif_insert_or_strip": false, 00:14:26.755 "zcopy": false, 00:14:26.755 "c2h_success": false, 00:14:26.755 "sock_priority": 0, 00:14:26.755 "abort_timeout_sec": 1, 00:14:26.755 "ack_timeout": 0, 00:14:26.755 "data_wr_pool_size": 0 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_create_subsystem", 00:14:26.755 "params": { 00:14:26.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.755 "allow_any_host": false, 00:14:26.755 "serial_number": "SPDK00000000000001", 00:14:26.755 "model_number": "SPDK bdev Controller", 00:14:26.755 "max_namespaces": 10, 00:14:26.755 "min_cntlid": 1, 00:14:26.755 "max_cntlid": 65519, 00:14:26.755 "ana_reporting": false 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_subsystem_add_host", 00:14:26.755 "params": { 00:14:26.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.755 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.755 "psk": "key0" 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_subsystem_add_ns", 00:14:26.755 "params": { 00:14:26.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.755 "namespace": { 
00:14:26.755 "nsid": 1, 00:14:26.755 "bdev_name": "malloc0", 00:14:26.755 "nguid": "9C5A0F236972401795A49D3921F8B4F8", 00:14:26.755 "uuid": "9c5a0f23-6972-4017-95a4-9d3921f8b4f8", 00:14:26.755 "no_auto_visible": false 00:14:26.755 } 00:14:26.755 } 00:14:26.755 }, 00:14:26.755 { 00:14:26.755 "method": "nvmf_subsystem_add_listener", 00:14:26.755 "params": { 00:14:26.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.755 "listen_address": { 00:14:26.755 "trtype": "TCP", 00:14:26.755 "adrfam": "IPv4", 00:14:26.755 "traddr": "10.0.0.3", 00:14:26.755 "trsvcid": "4420" 00:14:26.755 }, 00:14:26.755 "secure_channel": true 00:14:26.755 } 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 } 00:14:26.755 ] 00:14:26.755 }' 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72292 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72292 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72292 ']' 00:14:26.755 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.756 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.756 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.756 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.756 21:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.756 [2024-12-10 21:41:27.475069] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:26.756 [2024-12-10 21:41:27.476229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.014 [2024-12-10 21:41:27.623064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.014 [2024-12-10 21:41:27.655303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.014 [2024-12-10 21:41:27.655574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.014 [2024-12-10 21:41:27.655786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.014 [2024-12-10 21:41:27.655944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.014 [2024-12-10 21:41:27.655958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.014 [2024-12-10 21:41:27.656325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.272 [2024-12-10 21:41:27.803813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.272 [2024-12-10 21:41:27.863309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.272 [2024-12-10 21:41:27.895239] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:27.272 [2024-12-10 21:41:27.895530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72324 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72324 /var/tmp/bdevperf.sock 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72324 ']' 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
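The bdevperf instance started next receives its whole configuration through the generated JSON on /dev/fd/63. For reference, a minimal interactive equivalent is sketched below; it only uses invocations that appear verbatim elsewhere in this run (the socket paths, NQNs, controller name TLSTEST and the PSK file /tmp/tmp.MtgUcoG7Uo are this test's values), and it assumes the default sock/bdev options encoded in the generated config are acceptable:

  # start bdevperf in wait mode and drive it over its RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # register the TLS PSK and attach the controller over the secure channel
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # kick off the verify workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests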
00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:27.838 21:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:27.838 "subsystems": [ 00:14:27.838 { 00:14:27.838 "subsystem": "keyring", 00:14:27.838 "config": [ 00:14:27.838 { 00:14:27.838 "method": "keyring_file_add_key", 00:14:27.838 "params": { 00:14:27.838 "name": "key0", 00:14:27.838 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:27.838 } 00:14:27.838 } 00:14:27.838 ] 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "subsystem": "iobuf", 00:14:27.838 "config": [ 00:14:27.838 { 00:14:27.838 "method": "iobuf_set_options", 00:14:27.838 "params": { 00:14:27.838 "small_pool_count": 8192, 00:14:27.838 "large_pool_count": 1024, 00:14:27.838 "small_bufsize": 8192, 00:14:27.838 "large_bufsize": 135168, 00:14:27.838 "enable_numa": false 00:14:27.838 } 00:14:27.838 } 00:14:27.838 ] 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "subsystem": "sock", 00:14:27.838 "config": [ 00:14:27.838 { 00:14:27.838 "method": "sock_set_default_impl", 00:14:27.838 "params": { 00:14:27.838 "impl_name": "uring" 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "sock_impl_set_options", 00:14:27.838 "params": { 00:14:27.838 "impl_name": "ssl", 00:14:27.838 "recv_buf_size": 4096, 00:14:27.838 "send_buf_size": 4096, 00:14:27.838 "enable_recv_pipe": true, 00:14:27.838 "enable_quickack": false, 00:14:27.838 "enable_placement_id": 0, 00:14:27.838 "enable_zerocopy_send_server": true, 00:14:27.838 "enable_zerocopy_send_client": false, 00:14:27.838 "zerocopy_threshold": 0, 00:14:27.838 "tls_version": 0, 00:14:27.838 "enable_ktls": false 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "sock_impl_set_options", 00:14:27.838 "params": { 00:14:27.838 "impl_name": "posix", 00:14:27.838 "recv_buf_size": 2097152, 00:14:27.838 "send_buf_size": 2097152, 00:14:27.838 "enable_recv_pipe": true, 00:14:27.838 "enable_quickack": false, 00:14:27.838 "enable_placement_id": 0, 00:14:27.838 "enable_zerocopy_send_server": true, 00:14:27.838 "enable_zerocopy_send_client": false, 00:14:27.838 "zerocopy_threshold": 0, 00:14:27.838 "tls_version": 0, 00:14:27.838 "enable_ktls": false 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "sock_impl_set_options", 00:14:27.838 "params": { 00:14:27.838 "impl_name": "uring", 00:14:27.838 "recv_buf_size": 2097152, 00:14:27.838 "send_buf_size": 2097152, 00:14:27.838 "enable_recv_pipe": true, 00:14:27.838 "enable_quickack": false, 00:14:27.838 "enable_placement_id": 0, 00:14:27.838 "enable_zerocopy_send_server": false, 00:14:27.838 "enable_zerocopy_send_client": false, 00:14:27.838 "zerocopy_threshold": 0, 00:14:27.838 "tls_version": 0, 00:14:27.838 "enable_ktls": false 00:14:27.838 } 00:14:27.838 } 00:14:27.838 ] 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "subsystem": "vmd", 00:14:27.838 "config": [] 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "subsystem": "accel", 00:14:27.838 "config": [ 00:14:27.838 { 00:14:27.838 "method": "accel_set_options", 00:14:27.838 "params": { 00:14:27.838 "small_cache_size": 128, 00:14:27.838 "large_cache_size": 16, 00:14:27.838 "task_count": 2048, 00:14:27.838 "sequence_count": 
2048, 00:14:27.838 "buf_count": 2048 00:14:27.838 } 00:14:27.838 } 00:14:27.838 ] 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "subsystem": "bdev", 00:14:27.838 "config": [ 00:14:27.838 { 00:14:27.838 "method": "bdev_set_options", 00:14:27.838 "params": { 00:14:27.838 "bdev_io_pool_size": 65535, 00:14:27.838 "bdev_io_cache_size": 256, 00:14:27.838 "bdev_auto_examine": true, 00:14:27.838 "iobuf_small_cache_size": 128, 00:14:27.838 "iobuf_large_cache_size": 16 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "bdev_raid_set_options", 00:14:27.838 "params": { 00:14:27.838 "process_window_size_kb": 1024, 00:14:27.838 "process_max_bandwidth_mb_sec": 0 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "bdev_iscsi_set_options", 00:14:27.838 "params": { 00:14:27.838 "timeout_sec": 30 00:14:27.838 } 00:14:27.838 }, 00:14:27.838 { 00:14:27.838 "method": "bdev_nvme_set_options", 00:14:27.838 "params": { 00:14:27.838 "action_on_timeout": "none", 00:14:27.838 "timeout_us": 0, 00:14:27.838 "timeout_admin_us": 0, 00:14:27.838 "keep_alive_timeout_ms": 10000, 00:14:27.838 "arbitration_burst": 0, 00:14:27.838 "low_priority_weight": 0, 00:14:27.838 "medium_priority_weight": 0, 00:14:27.838 "high_priority_weight": 0, 00:14:27.838 "nvme_adminq_poll_period_us": 10000, 00:14:27.838 "nvme_ioq_poll_period_us": 0, 00:14:27.838 "io_queue_requests": 512, 00:14:27.838 "delay_cmd_submit": true, 00:14:27.838 "transport_retry_count": 4, 00:14:27.838 "bdev_retry_count": 3, 00:14:27.838 "transport_ack_timeout": 0, 00:14:27.838 "ctrlr_loss_timeout_sec": 0, 00:14:27.838 "reconnect_delay_sec": 0, 00:14:27.838 "fast_io_fail_timeout_sec": 0, 00:14:27.838 "disable_auto_failback": false, 00:14:27.838 "generate_uuids": false, 00:14:27.838 "transport_tos": 0, 00:14:27.838 "nvme_error_stat": false, 00:14:27.838 "rdma_srq_size": 0, 00:14:27.838 "io_path_stat": false, 00:14:27.838 "allow_accel_sequence": false, 00:14:27.839 "rdma_max_cq_size": 0, 00:14:27.839 "rdma_cm_event_timeout_ms": 0, 00:14:27.839 "dhchap_digests": [ 00:14:27.839 "sha256", 00:14:27.839 "sha384", 00:14:27.839 "sha512" 00:14:27.839 ], 00:14:27.839 "dhchap_dhgroups": [ 00:14:27.839 "null", 00:14:27.839 "ffdhe2048", 00:14:27.839 "ffdhe3072", 00:14:27.839 "ffdhe4096", 00:14:27.839 "ffdhe6144", 00:14:27.839 "ffdhe8192" 00:14:27.839 ], 00:14:27.839 "rdma_umr_per_io": false 00:14:27.839 } 00:14:27.839 }, 00:14:27.839 { 00:14:27.839 "method": "bdev_nvme_attach_controller", 00:14:27.839 "params": { 00:14:27.839 "name": "TLSTEST", 00:14:27.839 "trtype": "TCP", 00:14:27.839 "adrfam": "IPv4", 00:14:27.839 "traddr": "10.0.0.3", 00:14:27.839 "trsvcid": "4420", 00:14:27.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.839 "prchk_reftag": false, 00:14:27.839 "prchk_guard": false, 00:14:27.839 "ctrlr_loss_timeout_sec": 0, 00:14:27.839 "reconnect_delay_sec": 0, 00:14:27.839 "fast_io_fail_timeout_sec": 0, 00:14:27.839 "psk": "key0", 00:14:27.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.839 "hdgst": false, 00:14:27.839 "ddgst": false, 00:14:27.839 "multipath": "multipath" 00:14:27.839 } 00:14:27.839 }, 00:14:27.839 { 00:14:27.839 "method": "bdev_nvme_set_hotplug", 00:14:27.839 "params": { 00:14:27.839 "period_us": 100000, 00:14:27.839 "enable": false 00:14:27.839 } 00:14:27.839 }, 00:14:27.839 { 00:14:27.839 "method": "bdev_wait_for_examine" 00:14:27.839 } 00:14:27.839 ] 00:14:27.839 }, 00:14:27.839 { 00:14:27.839 "subsystem": "nbd", 00:14:27.839 "config": [] 00:14:27.839 } 00:14:27.839 ] 00:14:27.839 }' 00:14:27.839 [2024-12-10 
21:41:28.610027] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:27.839 [2024-12-10 21:41:28.610761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72324 ] 00:14:28.098 [2024-12-10 21:41:28.766150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.098 [2024-12-10 21:41:28.804864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.361 [2024-12-10 21:41:28.919174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.361 [2024-12-10 21:41:28.954405] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.937 21:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.937 21:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.937 21:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:29.195 Running I/O for 10 seconds... 00:14:31.503 3959.00 IOPS, 15.46 MiB/s [2024-12-10T21:41:33.221Z] 3885.50 IOPS, 15.18 MiB/s [2024-12-10T21:41:34.154Z] 3873.67 IOPS, 15.13 MiB/s [2024-12-10T21:41:35.090Z] 3907.75 IOPS, 15.26 MiB/s [2024-12-10T21:41:36.023Z] 3844.80 IOPS, 15.02 MiB/s [2024-12-10T21:41:36.958Z] 3872.00 IOPS, 15.12 MiB/s [2024-12-10T21:41:37.891Z] 3839.57 IOPS, 15.00 MiB/s [2024-12-10T21:41:39.290Z] 3861.88 IOPS, 15.09 MiB/s [2024-12-10T21:41:40.223Z] 3879.89 IOPS, 15.16 MiB/s [2024-12-10T21:41:40.223Z] 3899.10 IOPS, 15.23 MiB/s 00:14:39.440 Latency(us) 00:14:39.440 [2024-12-10T21:41:40.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:39.440 Verification LBA range: start 0x0 length 0x2000 00:14:39.441 TLSTESTn1 : 10.02 3905.34 15.26 0.00 0.00 32715.15 5868.45 43372.92 00:14:39.441 [2024-12-10T21:41:40.224Z] =================================================================================================================== 00:14:39.441 [2024-12-10T21:41:40.224Z] Total : 3905.34 15.26 0.00 0.00 32715.15 5868.45 43372.92 00:14:39.441 { 00:14:39.441 "results": [ 00:14:39.441 { 00:14:39.441 "job": "TLSTESTn1", 00:14:39.441 "core_mask": "0x4", 00:14:39.441 "workload": "verify", 00:14:39.441 "status": "finished", 00:14:39.441 "verify_range": { 00:14:39.441 "start": 0, 00:14:39.441 "length": 8192 00:14:39.441 }, 00:14:39.441 "queue_depth": 128, 00:14:39.441 "io_size": 4096, 00:14:39.441 "runtime": 10.016039, 00:14:39.441 "iops": 3905.3362312187483, 00:14:39.441 "mibps": 15.255219653198235, 00:14:39.441 "io_failed": 0, 00:14:39.441 "io_timeout": 0, 00:14:39.441 "avg_latency_us": 32715.150180442324, 00:14:39.441 "min_latency_us": 5868.450909090909, 00:14:39.441 "max_latency_us": 43372.91636363637 00:14:39.441 } 00:14:39.441 ], 00:14:39.441 "core_count": 1 00:14:39.441 } 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72324 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 72324 ']' 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72324 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72324 00:14:39.441 killing process with pid 72324 00:14:39.441 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.441 00:14:39.441 Latency(us) 00:14:39.441 [2024-12-10T21:41:40.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.441 [2024-12-10T21:41:40.224Z] =================================================================================================================== 00:14:39.441 [2024-12-10T21:41:40.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72324' 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72324 00:14:39.441 21:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72324 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72292 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72292 ']' 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72292 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72292 00:14:39.441 killing process with pid 72292 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72292' 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72292 00:14:39.441 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72292 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72459 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72459 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72459 ']' 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.699 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.699 [2024-12-10 21:41:40.348767] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:39.699 [2024-12-10 21:41:40.349163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.957 [2024-12-10 21:41:40.511486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.957 [2024-12-10 21:41:40.551419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.958 [2024-12-10 21:41:40.551693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.958 [2024-12-10 21:41:40.551885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.958 [2024-12-10 21:41:40.552069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.958 [2024-12-10 21:41:40.552086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:39.958 [2024-12-10 21:41:40.552438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.958 [2024-12-10 21:41:40.587209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.MtgUcoG7Uo 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MtgUcoG7Uo 00:14:39.958 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.216 [2024-12-10 21:41:40.958716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.216 21:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.782 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:41.041 [2024-12-10 21:41:41.622851] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.041 [2024-12-10 21:41:41.623103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:41.041 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.300 malloc0 00:14:41.300 21:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.557 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:41.814 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
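The setup_nvmf_tgt helper traced above boils down to the RPC sequence below; every call is the one actually logged in this run (NQNs, the 10.0.0.3:4420 listener and the PSK file /tmp/tmp.MtgUcoG7Uo are this test's values), collected here only as a readable summary:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a TLS listener, matching "secure_channel": true in the saved config above
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # register the PSK and allow host1 to connect with it
  $rpc keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0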
00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72513 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72513 /var/tmp/bdevperf.sock 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72513 ']' 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.072 21:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.072 [2024-12-10 21:41:42.779391] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:42.072 [2024-12-10 21:41:42.780104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72513 ] 00:14:42.330 [2024-12-10 21:41:42.921866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.330 [2024-12-10 21:41:42.954803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.330 [2024-12-10 21:41:42.984363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.330 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.330 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:42.330 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:42.588 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:42.845 [2024-12-10 21:41:43.604117] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.103 nvme0n1 00:14:43.103 21:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.103 Running I/O for 1 seconds... 
00:14:44.474 3902.00 IOPS, 15.24 MiB/s 00:14:44.474 Latency(us) 00:14:44.474 [2024-12-10T21:41:45.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.474 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:44.474 Verification LBA range: start 0x0 length 0x2000 00:14:44.474 nvme0n1 : 1.04 3890.36 15.20 0.00 0.00 32324.55 7328.12 26095.24 00:14:44.474 [2024-12-10T21:41:45.257Z] =================================================================================================================== 00:14:44.474 [2024-12-10T21:41:45.257Z] Total : 3890.36 15.20 0.00 0.00 32324.55 7328.12 26095.24 00:14:44.474 { 00:14:44.474 "results": [ 00:14:44.474 { 00:14:44.474 "job": "nvme0n1", 00:14:44.474 "core_mask": "0x2", 00:14:44.474 "workload": "verify", 00:14:44.474 "status": "finished", 00:14:44.474 "verify_range": { 00:14:44.474 "start": 0, 00:14:44.474 "length": 8192 00:14:44.474 }, 00:14:44.474 "queue_depth": 128, 00:14:44.474 "io_size": 4096, 00:14:44.474 "runtime": 1.03615, 00:14:44.474 "iops": 3890.3633643777443, 00:14:44.474 "mibps": 15.196731892100564, 00:14:44.474 "io_failed": 0, 00:14:44.474 "io_timeout": 0, 00:14:44.474 "avg_latency_us": 32324.551951467045, 00:14:44.474 "min_latency_us": 7328.1163636363635, 00:14:44.474 "max_latency_us": 26095.243636363637 00:14:44.474 } 00:14:44.474 ], 00:14:44.474 "core_count": 1 00:14:44.474 } 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72513 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72513 ']' 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72513 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72513 00:14:44.474 killing process with pid 72513 00:14:44.474 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.474 00:14:44.474 Latency(us) 00:14:44.474 [2024-12-10T21:41:45.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.474 [2024-12-10T21:41:45.257Z] =================================================================================================================== 00:14:44.474 [2024-12-10T21:41:45.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72513' 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72513 00:14:44.474 21:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72513 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72459 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72459 ']' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72459 00:14:44.474 21:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72459 00:14:44.474 killing process with pid 72459 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72459' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72459 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72459 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72551 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72551 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72551 ']' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.474 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.731 [2024-12-10 21:41:45.277876] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:44.731 [2024-12-10 21:41:45.278338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.731 [2024-12-10 21:41:45.421877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.731 [2024-12-10 21:41:45.453766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.731 [2024-12-10 21:41:45.453819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.731 [2024-12-10 21:41:45.453831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.731 [2024-12-10 21:41:45.453840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.731 [2024-12-10 21:41:45.453847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.731 [2024-12-10 21:41:45.454142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.731 [2024-12-10 21:41:45.484764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.989 [2024-12-10 21:41:45.589102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.989 malloc0 00:14:44.989 [2024-12-10 21:41:45.615966] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.989 [2024-12-10 21:41:45.616385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72576 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72576 /var/tmp/bdevperf.sock 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72576 ']' 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
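The perform_tests helper invoked below drives the verify job and produces the JSON result block printed further down in the log. If that block is saved to a file (results.json is an illustrative name, not one created by the test), the headline numbers can be pulled out with a one-liner such as:

  # fields match the result block logged below ("job", "iops", "avg_latency_us")
  python3 -c 'import json; r = json.load(open("results.json"))["results"][0]; print(r["job"], r["iops"], r["avg_latency_us"])'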
00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.989 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.989 [2024-12-10 21:41:45.699986] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:44.989 [2024-12-10 21:41:45.700341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72576 ] 00:14:45.247 [2024-12-10 21:41:45.853770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.247 [2024-12-10 21:41:45.893107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.247 [2024-12-10 21:41:45.928087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.247 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.247 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.247 21:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtgUcoG7Uo 00:14:45.812 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:46.069 [2024-12-10 21:41:46.619109] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.069 nvme0n1 00:14:46.069 21:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.069 Running I/O for 1 seconds... 
00:14:47.445 3892.00 IOPS, 15.20 MiB/s 00:14:47.445 Latency(us) 00:14:47.445 [2024-12-10T21:41:48.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.445 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.445 Verification LBA range: start 0x0 length 0x2000 00:14:47.445 nvme0n1 : 1.02 3929.96 15.35 0.00 0.00 32117.08 5749.29 19779.96 00:14:47.445 [2024-12-10T21:41:48.228Z] =================================================================================================================== 00:14:47.445 [2024-12-10T21:41:48.228Z] Total : 3929.96 15.35 0.00 0.00 32117.08 5749.29 19779.96 00:14:47.445 { 00:14:47.445 "results": [ 00:14:47.445 { 00:14:47.445 "job": "nvme0n1", 00:14:47.445 "core_mask": "0x2", 00:14:47.445 "workload": "verify", 00:14:47.445 "status": "finished", 00:14:47.445 "verify_range": { 00:14:47.445 "start": 0, 00:14:47.445 "length": 8192 00:14:47.445 }, 00:14:47.445 "queue_depth": 128, 00:14:47.445 "io_size": 4096, 00:14:47.445 "runtime": 1.02291, 00:14:47.445 "iops": 3929.9645130070094, 00:14:47.445 "mibps": 15.35142387893363, 00:14:47.445 "io_failed": 0, 00:14:47.445 "io_timeout": 0, 00:14:47.445 "avg_latency_us": 32117.075017639076, 00:14:47.445 "min_latency_us": 5749.294545454545, 00:14:47.445 "max_latency_us": 19779.956363636364 00:14:47.445 } 00:14:47.445 ], 00:14:47.445 "core_count": 1 00:14:47.445 } 00:14:47.445 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:47.445 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.445 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.445 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.445 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:47.445 "subsystems": [ 00:14:47.445 { 00:14:47.445 "subsystem": "keyring", 00:14:47.445 "config": [ 00:14:47.445 { 00:14:47.445 "method": "keyring_file_add_key", 00:14:47.445 "params": { 00:14:47.445 "name": "key0", 00:14:47.445 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:47.445 } 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "iobuf", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "iobuf_set_options", 00:14:47.446 "params": { 00:14:47.446 "small_pool_count": 8192, 00:14:47.446 "large_pool_count": 1024, 00:14:47.446 "small_bufsize": 8192, 00:14:47.446 "large_bufsize": 135168, 00:14:47.446 "enable_numa": false 00:14:47.446 } 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "sock", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "sock_set_default_impl", 00:14:47.446 "params": { 00:14:47.446 "impl_name": "uring" 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "sock_impl_set_options", 00:14:47.446 "params": { 00:14:47.446 "impl_name": "ssl", 00:14:47.446 "recv_buf_size": 4096, 00:14:47.446 "send_buf_size": 4096, 00:14:47.446 "enable_recv_pipe": true, 00:14:47.446 "enable_quickack": false, 00:14:47.446 "enable_placement_id": 0, 00:14:47.446 "enable_zerocopy_send_server": true, 00:14:47.446 "enable_zerocopy_send_client": false, 00:14:47.446 "zerocopy_threshold": 0, 00:14:47.446 "tls_version": 0, 00:14:47.446 "enable_ktls": false 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "sock_impl_set_options", 00:14:47.446 "params": { 00:14:47.446 "impl_name": "posix", 
00:14:47.446 "recv_buf_size": 2097152, 00:14:47.446 "send_buf_size": 2097152, 00:14:47.446 "enable_recv_pipe": true, 00:14:47.446 "enable_quickack": false, 00:14:47.446 "enable_placement_id": 0, 00:14:47.446 "enable_zerocopy_send_server": true, 00:14:47.446 "enable_zerocopy_send_client": false, 00:14:47.446 "zerocopy_threshold": 0, 00:14:47.446 "tls_version": 0, 00:14:47.446 "enable_ktls": false 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "sock_impl_set_options", 00:14:47.446 "params": { 00:14:47.446 "impl_name": "uring", 00:14:47.446 "recv_buf_size": 2097152, 00:14:47.446 "send_buf_size": 2097152, 00:14:47.446 "enable_recv_pipe": true, 00:14:47.446 "enable_quickack": false, 00:14:47.446 "enable_placement_id": 0, 00:14:47.446 "enable_zerocopy_send_server": false, 00:14:47.446 "enable_zerocopy_send_client": false, 00:14:47.446 "zerocopy_threshold": 0, 00:14:47.446 "tls_version": 0, 00:14:47.446 "enable_ktls": false 00:14:47.446 } 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "vmd", 00:14:47.446 "config": [] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "accel", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "accel_set_options", 00:14:47.446 "params": { 00:14:47.446 "small_cache_size": 128, 00:14:47.446 "large_cache_size": 16, 00:14:47.446 "task_count": 2048, 00:14:47.446 "sequence_count": 2048, 00:14:47.446 "buf_count": 2048 00:14:47.446 } 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "bdev", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "bdev_set_options", 00:14:47.446 "params": { 00:14:47.446 "bdev_io_pool_size": 65535, 00:14:47.446 "bdev_io_cache_size": 256, 00:14:47.446 "bdev_auto_examine": true, 00:14:47.446 "iobuf_small_cache_size": 128, 00:14:47.446 "iobuf_large_cache_size": 16 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_raid_set_options", 00:14:47.446 "params": { 00:14:47.446 "process_window_size_kb": 1024, 00:14:47.446 "process_max_bandwidth_mb_sec": 0 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_iscsi_set_options", 00:14:47.446 "params": { 00:14:47.446 "timeout_sec": 30 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_nvme_set_options", 00:14:47.446 "params": { 00:14:47.446 "action_on_timeout": "none", 00:14:47.446 "timeout_us": 0, 00:14:47.446 "timeout_admin_us": 0, 00:14:47.446 "keep_alive_timeout_ms": 10000, 00:14:47.446 "arbitration_burst": 0, 00:14:47.446 "low_priority_weight": 0, 00:14:47.446 "medium_priority_weight": 0, 00:14:47.446 "high_priority_weight": 0, 00:14:47.446 "nvme_adminq_poll_period_us": 10000, 00:14:47.446 "nvme_ioq_poll_period_us": 0, 00:14:47.446 "io_queue_requests": 0, 00:14:47.446 "delay_cmd_submit": true, 00:14:47.446 "transport_retry_count": 4, 00:14:47.446 "bdev_retry_count": 3, 00:14:47.446 "transport_ack_timeout": 0, 00:14:47.446 "ctrlr_loss_timeout_sec": 0, 00:14:47.446 "reconnect_delay_sec": 0, 00:14:47.446 "fast_io_fail_timeout_sec": 0, 00:14:47.446 "disable_auto_failback": false, 00:14:47.446 "generate_uuids": false, 00:14:47.446 "transport_tos": 0, 00:14:47.446 "nvme_error_stat": false, 00:14:47.446 "rdma_srq_size": 0, 00:14:47.446 "io_path_stat": false, 00:14:47.446 "allow_accel_sequence": false, 00:14:47.446 "rdma_max_cq_size": 0, 00:14:47.446 "rdma_cm_event_timeout_ms": 0, 00:14:47.446 "dhchap_digests": [ 00:14:47.446 "sha256", 00:14:47.446 "sha384", 00:14:47.446 "sha512" 00:14:47.446 ], 00:14:47.446 
"dhchap_dhgroups": [ 00:14:47.446 "null", 00:14:47.446 "ffdhe2048", 00:14:47.446 "ffdhe3072", 00:14:47.446 "ffdhe4096", 00:14:47.446 "ffdhe6144", 00:14:47.446 "ffdhe8192" 00:14:47.446 ], 00:14:47.446 "rdma_umr_per_io": false 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_nvme_set_hotplug", 00:14:47.446 "params": { 00:14:47.446 "period_us": 100000, 00:14:47.446 "enable": false 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_malloc_create", 00:14:47.446 "params": { 00:14:47.446 "name": "malloc0", 00:14:47.446 "num_blocks": 8192, 00:14:47.446 "block_size": 4096, 00:14:47.446 "physical_block_size": 4096, 00:14:47.446 "uuid": "411072ac-bd88-41cc-b4a6-2f4a829fba34", 00:14:47.446 "optimal_io_boundary": 0, 00:14:47.446 "md_size": 0, 00:14:47.446 "dif_type": 0, 00:14:47.446 "dif_is_head_of_md": false, 00:14:47.446 "dif_pi_format": 0 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "bdev_wait_for_examine" 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "nbd", 00:14:47.446 "config": [] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "scheduler", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "framework_set_scheduler", 00:14:47.446 "params": { 00:14:47.446 "name": "static" 00:14:47.446 } 00:14:47.446 } 00:14:47.446 ] 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "subsystem": "nvmf", 00:14:47.446 "config": [ 00:14:47.446 { 00:14:47.446 "method": "nvmf_set_config", 00:14:47.446 "params": { 00:14:47.446 "discovery_filter": "match_any", 00:14:47.446 "admin_cmd_passthru": { 00:14:47.446 "identify_ctrlr": false 00:14:47.446 }, 00:14:47.446 "dhchap_digests": [ 00:14:47.446 "sha256", 00:14:47.446 "sha384", 00:14:47.446 "sha512" 00:14:47.446 ], 00:14:47.446 "dhchap_dhgroups": [ 00:14:47.446 "null", 00:14:47.446 "ffdhe2048", 00:14:47.446 "ffdhe3072", 00:14:47.446 "ffdhe4096", 00:14:47.446 "ffdhe6144", 00:14:47.446 "ffdhe8192" 00:14:47.446 ] 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "nvmf_set_max_subsystems", 00:14:47.446 "params": { 00:14:47.446 "max_subsystems": 1024 00:14:47.446 } 00:14:47.446 }, 00:14:47.446 { 00:14:47.446 "method": "nvmf_set_crdt", 00:14:47.447 "params": { 00:14:47.447 "crdt1": 0, 00:14:47.447 "crdt2": 0, 00:14:47.447 "crdt3": 0 00:14:47.447 } 00:14:47.447 }, 00:14:47.447 { 00:14:47.447 "method": "nvmf_create_transport", 00:14:47.447 "params": { 00:14:47.447 "trtype": "TCP", 00:14:47.447 "max_queue_depth": 128, 00:14:47.447 "max_io_qpairs_per_ctrlr": 127, 00:14:47.447 "in_capsule_data_size": 4096, 00:14:47.447 "max_io_size": 131072, 00:14:47.447 "io_unit_size": 131072, 00:14:47.447 "max_aq_depth": 128, 00:14:47.447 "num_shared_buffers": 511, 00:14:47.447 "buf_cache_size": 4294967295, 00:14:47.447 "dif_insert_or_strip": false, 00:14:47.447 "zcopy": false, 00:14:47.447 "c2h_success": false, 00:14:47.447 "sock_priority": 0, 00:14:47.447 "abort_timeout_sec": 1, 00:14:47.447 "ack_timeout": 0, 00:14:47.447 "data_wr_pool_size": 0 00:14:47.447 } 00:14:47.447 }, 00:14:47.447 { 00:14:47.447 "method": "nvmf_create_subsystem", 00:14:47.447 "params": { 00:14:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.447 "allow_any_host": false, 00:14:47.447 "serial_number": "00000000000000000000", 00:14:47.447 "model_number": "SPDK bdev Controller", 00:14:47.447 "max_namespaces": 32, 00:14:47.447 "min_cntlid": 1, 00:14:47.447 "max_cntlid": 65519, 00:14:47.447 "ana_reporting": false 00:14:47.447 } 00:14:47.447 }, 00:14:47.447 { 00:14:47.447 
"method": "nvmf_subsystem_add_host", 00:14:47.447 "params": { 00:14:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.447 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.447 "psk": "key0" 00:14:47.447 } 00:14:47.447 }, 00:14:47.447 { 00:14:47.447 "method": "nvmf_subsystem_add_ns", 00:14:47.447 "params": { 00:14:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.447 "namespace": { 00:14:47.447 "nsid": 1, 00:14:47.447 "bdev_name": "malloc0", 00:14:47.447 "nguid": "411072ACBD8841CCB4A62F4A829FBA34", 00:14:47.447 "uuid": "411072ac-bd88-41cc-b4a6-2f4a829fba34", 00:14:47.447 "no_auto_visible": false 00:14:47.447 } 00:14:47.447 } 00:14:47.447 }, 00:14:47.447 { 00:14:47.447 "method": "nvmf_subsystem_add_listener", 00:14:47.447 "params": { 00:14:47.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.447 "listen_address": { 00:14:47.447 "trtype": "TCP", 00:14:47.447 "adrfam": "IPv4", 00:14:47.447 "traddr": "10.0.0.3", 00:14:47.447 "trsvcid": "4420" 00:14:47.447 }, 00:14:47.447 "secure_channel": false, 00:14:47.447 "sock_impl": "ssl" 00:14:47.447 } 00:14:47.447 } 00:14:47.447 ] 00:14:47.447 } 00:14:47.447 ] 00:14:47.447 }' 00:14:47.447 21:41:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:47.706 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:47.706 "subsystems": [ 00:14:47.706 { 00:14:47.706 "subsystem": "keyring", 00:14:47.706 "config": [ 00:14:47.706 { 00:14:47.706 "method": "keyring_file_add_key", 00:14:47.706 "params": { 00:14:47.706 "name": "key0", 00:14:47.706 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:47.706 } 00:14:47.706 } 00:14:47.706 ] 00:14:47.706 }, 00:14:47.706 { 00:14:47.706 "subsystem": "iobuf", 00:14:47.706 "config": [ 00:14:47.706 { 00:14:47.706 "method": "iobuf_set_options", 00:14:47.706 "params": { 00:14:47.706 "small_pool_count": 8192, 00:14:47.706 "large_pool_count": 1024, 00:14:47.706 "small_bufsize": 8192, 00:14:47.706 "large_bufsize": 135168, 00:14:47.706 "enable_numa": false 00:14:47.706 } 00:14:47.706 } 00:14:47.706 ] 00:14:47.706 }, 00:14:47.706 { 00:14:47.706 "subsystem": "sock", 00:14:47.706 "config": [ 00:14:47.706 { 00:14:47.706 "method": "sock_set_default_impl", 00:14:47.706 "params": { 00:14:47.706 "impl_name": "uring" 00:14:47.706 } 00:14:47.706 }, 00:14:47.706 { 00:14:47.706 "method": "sock_impl_set_options", 00:14:47.706 "params": { 00:14:47.706 "impl_name": "ssl", 00:14:47.706 "recv_buf_size": 4096, 00:14:47.706 "send_buf_size": 4096, 00:14:47.706 "enable_recv_pipe": true, 00:14:47.706 "enable_quickack": false, 00:14:47.706 "enable_placement_id": 0, 00:14:47.706 "enable_zerocopy_send_server": true, 00:14:47.706 "enable_zerocopy_send_client": false, 00:14:47.706 "zerocopy_threshold": 0, 00:14:47.706 "tls_version": 0, 00:14:47.706 "enable_ktls": false 00:14:47.706 } 00:14:47.706 }, 00:14:47.706 { 00:14:47.706 "method": "sock_impl_set_options", 00:14:47.706 "params": { 00:14:47.707 "impl_name": "posix", 00:14:47.707 "recv_buf_size": 2097152, 00:14:47.707 "send_buf_size": 2097152, 00:14:47.707 "enable_recv_pipe": true, 00:14:47.707 "enable_quickack": false, 00:14:47.707 "enable_placement_id": 0, 00:14:47.707 "enable_zerocopy_send_server": true, 00:14:47.707 "enable_zerocopy_send_client": false, 00:14:47.707 "zerocopy_threshold": 0, 00:14:47.707 "tls_version": 0, 00:14:47.707 "enable_ktls": false 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "sock_impl_set_options", 00:14:47.707 "params": { 00:14:47.707 
"impl_name": "uring", 00:14:47.707 "recv_buf_size": 2097152, 00:14:47.707 "send_buf_size": 2097152, 00:14:47.707 "enable_recv_pipe": true, 00:14:47.707 "enable_quickack": false, 00:14:47.707 "enable_placement_id": 0, 00:14:47.707 "enable_zerocopy_send_server": false, 00:14:47.707 "enable_zerocopy_send_client": false, 00:14:47.707 "zerocopy_threshold": 0, 00:14:47.707 "tls_version": 0, 00:14:47.707 "enable_ktls": false 00:14:47.707 } 00:14:47.707 } 00:14:47.707 ] 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "subsystem": "vmd", 00:14:47.707 "config": [] 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "subsystem": "accel", 00:14:47.707 "config": [ 00:14:47.707 { 00:14:47.707 "method": "accel_set_options", 00:14:47.707 "params": { 00:14:47.707 "small_cache_size": 128, 00:14:47.707 "large_cache_size": 16, 00:14:47.707 "task_count": 2048, 00:14:47.707 "sequence_count": 2048, 00:14:47.707 "buf_count": 2048 00:14:47.707 } 00:14:47.707 } 00:14:47.707 ] 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "subsystem": "bdev", 00:14:47.707 "config": [ 00:14:47.707 { 00:14:47.707 "method": "bdev_set_options", 00:14:47.707 "params": { 00:14:47.707 "bdev_io_pool_size": 65535, 00:14:47.707 "bdev_io_cache_size": 256, 00:14:47.707 "bdev_auto_examine": true, 00:14:47.707 "iobuf_small_cache_size": 128, 00:14:47.707 "iobuf_large_cache_size": 16 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_raid_set_options", 00:14:47.707 "params": { 00:14:47.707 "process_window_size_kb": 1024, 00:14:47.707 "process_max_bandwidth_mb_sec": 0 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_iscsi_set_options", 00:14:47.707 "params": { 00:14:47.707 "timeout_sec": 30 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_nvme_set_options", 00:14:47.707 "params": { 00:14:47.707 "action_on_timeout": "none", 00:14:47.707 "timeout_us": 0, 00:14:47.707 "timeout_admin_us": 0, 00:14:47.707 "keep_alive_timeout_ms": 10000, 00:14:47.707 "arbitration_burst": 0, 00:14:47.707 "low_priority_weight": 0, 00:14:47.707 "medium_priority_weight": 0, 00:14:47.707 "high_priority_weight": 0, 00:14:47.707 "nvme_adminq_poll_period_us": 10000, 00:14:47.707 "nvme_ioq_poll_period_us": 0, 00:14:47.707 "io_queue_requests": 512, 00:14:47.707 "delay_cmd_submit": true, 00:14:47.707 "transport_retry_count": 4, 00:14:47.707 "bdev_retry_count": 3, 00:14:47.707 "transport_ack_timeout": 0, 00:14:47.707 "ctrlr_loss_timeout_sec": 0, 00:14:47.707 "reconnect_delay_sec": 0, 00:14:47.707 "fast_io_fail_timeout_sec": 0, 00:14:47.707 "disable_auto_failback": false, 00:14:47.707 "generate_uuids": false, 00:14:47.707 "transport_tos": 0, 00:14:47.707 "nvme_error_stat": false, 00:14:47.707 "rdma_srq_size": 0, 00:14:47.707 "io_path_stat": false, 00:14:47.707 "allow_accel_sequence": false, 00:14:47.707 "rdma_max_cq_size": 0, 00:14:47.707 "rdma_cm_event_timeout_ms": 0, 00:14:47.707 "dhchap_digests": [ 00:14:47.707 "sha256", 00:14:47.707 "sha384", 00:14:47.707 "sha512" 00:14:47.707 ], 00:14:47.707 "dhchap_dhgroups": [ 00:14:47.707 "null", 00:14:47.707 "ffdhe2048", 00:14:47.707 "ffdhe3072", 00:14:47.707 "ffdhe4096", 00:14:47.707 "ffdhe6144", 00:14:47.707 "ffdhe8192" 00:14:47.707 ], 00:14:47.707 "rdma_umr_per_io": false 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_nvme_attach_controller", 00:14:47.707 "params": { 00:14:47.707 "name": "nvme0", 00:14:47.707 "trtype": "TCP", 00:14:47.707 "adrfam": "IPv4", 00:14:47.707 "traddr": "10.0.0.3", 00:14:47.707 "trsvcid": "4420", 00:14:47.707 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:47.707 "prchk_reftag": false, 00:14:47.707 "prchk_guard": false, 00:14:47.707 "ctrlr_loss_timeout_sec": 0, 00:14:47.707 "reconnect_delay_sec": 0, 00:14:47.707 "fast_io_fail_timeout_sec": 0, 00:14:47.707 "psk": "key0", 00:14:47.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.707 "hdgst": false, 00:14:47.707 "ddgst": false, 00:14:47.707 "multipath": "multipath" 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_nvme_set_hotplug", 00:14:47.707 "params": { 00:14:47.707 "period_us": 100000, 00:14:47.707 "enable": false 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_enable_histogram", 00:14:47.707 "params": { 00:14:47.707 "name": "nvme0n1", 00:14:47.707 "enable": true 00:14:47.707 } 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "method": "bdev_wait_for_examine" 00:14:47.707 } 00:14:47.707 ] 00:14:47.707 }, 00:14:47.707 { 00:14:47.707 "subsystem": "nbd", 00:14:47.707 "config": [] 00:14:47.707 } 00:14:47.707 ] 00:14:47.707 }' 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72576 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72576 ']' 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72576 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72576 00:14:47.707 killing process with pid 72576 00:14:47.707 Received shutdown signal, test time was about 1.000000 seconds 00:14:47.707 00:14:47.707 Latency(us) 00:14:47.707 [2024-12-10T21:41:48.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.707 [2024-12-10T21:41:48.490Z] =================================================================================================================== 00:14:47.707 [2024-12-10T21:41:48.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72576' 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72576 00:14:47.707 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72576 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72551 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72551 ']' 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72551 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72551 00:14:47.966 killing process with pid 72551 00:14:47.966 21:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72551' 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72551 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72551 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.966 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:47.966 "subsystems": [ 00:14:47.966 { 00:14:47.966 "subsystem": "keyring", 00:14:47.966 "config": [ 00:14:47.966 { 00:14:47.966 "method": "keyring_file_add_key", 00:14:47.966 "params": { 00:14:47.966 "name": "key0", 00:14:47.966 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:47.966 } 00:14:47.966 } 00:14:47.966 ] 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "subsystem": "iobuf", 00:14:47.966 "config": [ 00:14:47.966 { 00:14:47.966 "method": "iobuf_set_options", 00:14:47.966 "params": { 00:14:47.966 "small_pool_count": 8192, 00:14:47.966 "large_pool_count": 1024, 00:14:47.966 "small_bufsize": 8192, 00:14:47.966 "large_bufsize": 135168, 00:14:47.966 "enable_numa": false 00:14:47.966 } 00:14:47.966 } 00:14:47.966 ] 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "subsystem": "sock", 00:14:47.966 "config": [ 00:14:47.966 { 00:14:47.966 "method": "sock_set_default_impl", 00:14:47.966 "params": { 00:14:47.966 "impl_name": "uring" 00:14:47.966 } 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "method": "sock_impl_set_options", 00:14:47.966 "params": { 00:14:47.966 "impl_name": "ssl", 00:14:47.966 "recv_buf_size": 4096, 00:14:47.966 "send_buf_size": 4096, 00:14:47.966 "enable_recv_pipe": true, 00:14:47.966 "enable_quickack": false, 00:14:47.966 "enable_placement_id": 0, 00:14:47.966 "enable_zerocopy_send_server": true, 00:14:47.966 "enable_zerocopy_send_client": false, 00:14:47.966 "zerocopy_threshold": 0, 00:14:47.967 "tls_version": 0, 00:14:47.967 "enable_ktls": false 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "sock_impl_set_options", 00:14:47.967 "params": { 00:14:47.967 "impl_name": "posix", 00:14:47.967 "recv_buf_size": 2097152, 00:14:47.967 "send_buf_size": 2097152, 00:14:47.967 "enable_recv_pipe": true, 00:14:47.967 "enable_quickack": false, 00:14:47.967 "enable_placement_id": 0, 00:14:47.967 "enable_zerocopy_send_server": true, 00:14:47.967 "enable_zerocopy_send_client": false, 00:14:47.967 "zerocopy_threshold": 0, 00:14:47.967 "tls_version": 0, 00:14:47.967 "enable_ktls": false 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "sock_impl_set_options", 00:14:47.967 "params": { 00:14:47.967 "impl_name": "uring", 00:14:47.967 "recv_buf_size": 2097152, 00:14:47.967 "send_buf_size": 2097152, 00:14:47.967 "enable_recv_pipe": true, 00:14:47.967 "enable_quickack": false, 00:14:47.967 "enable_placement_id": 0, 00:14:47.967 "enable_zerocopy_send_server": false, 00:14:47.967 "enable_zerocopy_send_client": false, 00:14:47.967 "zerocopy_threshold": 0, 00:14:47.967 "tls_version": 0, 00:14:47.967 
"enable_ktls": false 00:14:47.967 } 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "vmd", 00:14:47.967 "config": [] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "accel", 00:14:47.967 "config": [ 00:14:47.967 { 00:14:47.967 "method": "accel_set_options", 00:14:47.967 "params": { 00:14:47.967 "small_cache_size": 128, 00:14:47.967 "large_cache_size": 16, 00:14:47.967 "task_count": 2048, 00:14:47.967 "sequence_count": 2048, 00:14:47.967 "buf_count": 2048 00:14:47.967 } 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "bdev", 00:14:47.967 "config": [ 00:14:47.967 { 00:14:47.967 "method": "bdev_set_options", 00:14:47.967 "params": { 00:14:47.967 "bdev_io_pool_size": 65535, 00:14:47.967 "bdev_io_cache_size": 256, 00:14:47.967 "bdev_auto_examine": true, 00:14:47.967 "iobuf_small_cache_size": 128, 00:14:47.967 "iobuf_large_cache_size": 16 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "bdev_raid_set_options", 00:14:47.967 "params": { 00:14:47.967 "process_window_size_kb": 1024, 00:14:47.967 "process_max_bandwidth_mb_sec": 0 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "bdev_iscsi_set_options", 00:14:47.967 "params": { 00:14:47.967 "timeout_sec": 30 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "bdev_nvme_set_options", 00:14:47.967 "params": { 00:14:47.967 "action_on_timeout": "none", 00:14:47.967 "timeout_us": 0, 00:14:47.967 "timeout_admin_us": 0, 00:14:47.967 "keep_alive_timeout_ms": 10000, 00:14:47.967 "arbitration_burst": 0, 00:14:47.967 "low_priority_weight": 0, 00:14:47.967 "medium_priority_weight": 0, 00:14:47.967 "high_priority_weight": 0, 00:14:47.967 "nvme_adminq_poll_period_us": 10000, 00:14:47.967 "nvme_ioq_poll_period_us": 0, 00:14:47.967 "io_queue_requests": 0, 00:14:47.967 "delay_cmd_submit": true, 00:14:47.967 "transport_retry_count": 4, 00:14:47.967 "bdev_retry_count": 3, 00:14:47.967 "transport_ack_timeout": 0, 00:14:47.967 "ctrlr_loss_timeout_sec": 0, 00:14:47.967 "reconnect_delay_sec": 0, 00:14:47.967 "fast_io_fail_timeout_sec": 0, 00:14:47.967 "disable_auto_failback": false, 00:14:47.967 "generate_uuids": false, 00:14:47.967 "transport_tos": 0, 00:14:47.967 "nvme_error_stat": false, 00:14:47.967 "rdma_srq_size": 0, 00:14:47.967 "io_path_stat": false, 00:14:47.967 "allow_accel_sequence": false, 00:14:47.967 "rdma_max_cq_size": 0, 00:14:47.967 "rdma_cm_event_timeout_ms": 0, 00:14:47.967 "dhchap_digests": [ 00:14:47.967 "sha256", 00:14:47.967 "sha384", 00:14:47.967 "sha512" 00:14:47.967 ], 00:14:47.967 "dhchap_dhgroups": [ 00:14:47.967 "null", 00:14:47.967 "ffdhe2048", 00:14:47.967 "ffdhe3072", 00:14:47.967 "ffdhe4096", 00:14:47.967 "ffdhe6144", 00:14:47.967 "ffdhe8192" 00:14:47.967 ], 00:14:47.967 "rdma_umr_per_io": false 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "bdev_nvme_set_hotplug", 00:14:47.967 "params": { 00:14:47.967 "period_us": 100000, 00:14:47.967 "enable": false 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "bdev_malloc_create", 00:14:47.967 "params": { 00:14:47.967 "name": "malloc0", 00:14:47.967 "num_blocks": 8192, 00:14:47.967 "block_size": 4096, 00:14:47.967 "physical_block_size": 4096, 00:14:47.967 "uuid": "411072ac-bd88-41cc-b4a6-2f4a829fba34", 00:14:47.967 "optimal_io_boundary": 0, 00:14:47.967 "md_size": 0, 00:14:47.967 "dif_type": 0, 00:14:47.967 "dif_is_head_of_md": false, 00:14:47.967 "dif_pi_format": 0 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 
00:14:47.967 "method": "bdev_wait_for_examine" 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "nbd", 00:14:47.967 "config": [] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "scheduler", 00:14:47.967 "config": [ 00:14:47.967 { 00:14:47.967 "method": "framework_set_scheduler", 00:14:47.967 "params": { 00:14:47.967 "name": "static" 00:14:47.967 } 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "subsystem": "nvmf", 00:14:47.967 "config": [ 00:14:47.967 { 00:14:47.967 "method": "nvmf_set_config", 00:14:47.967 "params": { 00:14:47.967 "discovery_filter": "match_any", 00:14:47.967 "admin_cmd_passthru": { 00:14:47.967 "identify_ctrlr": false 00:14:47.967 }, 00:14:47.967 "dhchap_digests": [ 00:14:47.967 "sha256", 00:14:47.967 "sha384", 00:14:47.967 "sha512" 00:14:47.967 ], 00:14:47.967 "dhchap_dhgroups": [ 00:14:47.967 "null", 00:14:47.967 "ffdhe2048", 00:14:47.967 "ffdhe3072", 00:14:47.967 "ffdhe4096", 00:14:47.967 "ffdhe6144", 00:14:47.967 "ffdhe8192" 00:14:47.967 ] 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_set_max_subsystems", 00:14:47.967 "params": { 00:14:47.967 "max_subsystems": 1024 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_set_crdt", 00:14:47.967 "params": { 00:14:47.967 "crdt1": 0, 00:14:47.967 "crdt2": 0, 00:14:47.967 "crdt3": 0 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_create_transport", 00:14:47.967 "params": { 00:14:47.967 "trtype": "TCP", 00:14:47.967 "max_queue_depth": 128, 00:14:47.967 "max_io_qpairs_per_ctrlr": 127, 00:14:47.967 "in_capsule_data_size": 4096, 00:14:47.967 "max_io_size": 131072, 00:14:47.967 "io_unit_size": 131072, 00:14:47.967 "max_aq_depth": 128, 00:14:47.967 "num_shared_buffers": 511, 00:14:47.967 "buf_cache_size": 4294967295, 00:14:47.967 "dif_insert_or_strip": false, 00:14:47.967 "zcopy": false, 00:14:47.967 "c2h_success": false, 00:14:47.967 "sock_priority": 0, 00:14:47.967 "abort_timeout_sec": 1, 00:14:47.967 "ack_timeout": 0, 00:14:47.967 "data_wr_pool_size": 0 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_create_subsystem", 00:14:47.967 "params": { 00:14:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.967 "allow_any_host": false, 00:14:47.967 "serial_number": "00000000000000000000", 00:14:47.967 "model_number": "SPDK bdev Controller", 00:14:47.967 "max_namespaces": 32, 00:14:47.967 "min_cntlid": 1, 00:14:47.967 "max_cntlid": 65519, 00:14:47.967 "ana_reporting": false 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_subsystem_add_host", 00:14:47.967 "params": { 00:14:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.967 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.967 "psk": "key0" 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_subsystem_add_ns", 00:14:47.967 "params": { 00:14:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.967 "namespace": { 00:14:47.967 "nsid": 1, 00:14:47.967 "bdev_name": "malloc0", 00:14:47.967 "nguid": "411072ACBD8841CCB4A62F4A829FBA34", 00:14:47.967 "uuid": "411072ac-bd88-41cc-b4a6-2f4a829fba34", 00:14:47.967 "no_auto_visible": false 00:14:47.967 } 00:14:47.967 } 00:14:47.967 }, 00:14:47.967 { 00:14:47.967 "method": "nvmf_subsystem_add_listener", 00:14:47.967 "params": { 00:14:47.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.967 "listen_address": { 00:14:47.967 "trtype": "TCP", 00:14:47.967 "adrfam": "IPv4", 00:14:47.967 "traddr": "10.0.0.3", 00:14:47.967 
"trsvcid": "4420" 00:14:47.967 }, 00:14:47.967 "secure_channel": false, 00:14:47.967 "sock_impl": "ssl" 00:14:47.967 } 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 } 00:14:47.967 ] 00:14:47.967 }' 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72628 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72628 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72628 ']' 00:14:47.967 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.968 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.968 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.968 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.968 21:41:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.226 [2024-12-10 21:41:48.756630] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:48.226 [2024-12-10 21:41:48.756729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.226 [2024-12-10 21:41:48.899085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.226 [2024-12-10 21:41:48.931321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.226 [2024-12-10 21:41:48.931384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.226 [2024-12-10 21:41:48.931396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.226 [2024-12-10 21:41:48.931404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.226 [2024-12-10 21:41:48.931411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.226 [2024-12-10 21:41:48.931792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.484 [2024-12-10 21:41:49.075977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.484 [2024-12-10 21:41:49.136177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.484 [2024-12-10 21:41:49.168149] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:48.484 [2024-12-10 21:41:49.168433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72661 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72661 /var/tmp/bdevperf.sock 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72661 ']' 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.051 21:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:49.051 "subsystems": [ 00:14:49.051 { 00:14:49.051 "subsystem": "keyring", 00:14:49.051 "config": [ 00:14:49.051 { 00:14:49.051 "method": "keyring_file_add_key", 00:14:49.051 "params": { 00:14:49.051 "name": "key0", 00:14:49.051 "path": "/tmp/tmp.MtgUcoG7Uo" 00:14:49.051 } 00:14:49.051 } 00:14:49.051 ] 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "subsystem": "iobuf", 00:14:49.051 "config": [ 00:14:49.051 { 00:14:49.051 "method": "iobuf_set_options", 00:14:49.051 "params": { 00:14:49.051 "small_pool_count": 8192, 00:14:49.051 "large_pool_count": 1024, 00:14:49.051 "small_bufsize": 8192, 00:14:49.051 "large_bufsize": 135168, 00:14:49.051 "enable_numa": false 00:14:49.051 } 00:14:49.051 } 00:14:49.051 ] 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "subsystem": "sock", 00:14:49.051 "config": [ 00:14:49.051 { 00:14:49.051 "method": "sock_set_default_impl", 00:14:49.051 "params": { 00:14:49.051 "impl_name": "uring" 00:14:49.051 } 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "method": "sock_impl_set_options", 00:14:49.051 "params": { 00:14:49.051 "impl_name": "ssl", 00:14:49.051 "recv_buf_size": 4096, 00:14:49.051 "send_buf_size": 4096, 00:14:49.051 "enable_recv_pipe": true, 00:14:49.051 "enable_quickack": false, 00:14:49.051 "enable_placement_id": 0, 00:14:49.051 "enable_zerocopy_send_server": true, 00:14:49.051 "enable_zerocopy_send_client": false, 00:14:49.051 "zerocopy_threshold": 0, 00:14:49.051 "tls_version": 0, 00:14:49.051 "enable_ktls": false 00:14:49.051 } 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "method": "sock_impl_set_options", 00:14:49.051 "params": { 00:14:49.051 "impl_name": "posix", 00:14:49.051 "recv_buf_size": 2097152, 00:14:49.051 "send_buf_size": 2097152, 00:14:49.051 "enable_recv_pipe": true, 00:14:49.051 "enable_quickack": false, 00:14:49.051 "enable_placement_id": 0, 00:14:49.051 "enable_zerocopy_send_server": true, 00:14:49.051 "enable_zerocopy_send_client": false, 00:14:49.051 "zerocopy_threshold": 0, 00:14:49.051 "tls_version": 0, 00:14:49.051 "enable_ktls": false 00:14:49.051 } 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "method": "sock_impl_set_options", 00:14:49.051 "params": { 00:14:49.051 "impl_name": "uring", 00:14:49.051 "recv_buf_size": 2097152, 00:14:49.051 "send_buf_size": 2097152, 00:14:49.051 "enable_recv_pipe": true, 00:14:49.051 "enable_quickack": false, 00:14:49.051 "enable_placement_id": 0, 00:14:49.051 "enable_zerocopy_send_server": false, 00:14:49.051 "enable_zerocopy_send_client": false, 00:14:49.051 "zerocopy_threshold": 0, 00:14:49.051 "tls_version": 0, 00:14:49.051 "enable_ktls": false 00:14:49.051 } 00:14:49.051 } 00:14:49.051 ] 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "subsystem": "vmd", 00:14:49.051 "config": [] 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "subsystem": "accel", 00:14:49.051 "config": [ 00:14:49.051 { 00:14:49.051 "method": "accel_set_options", 00:14:49.051 "params": { 00:14:49.051 "small_cache_size": 128, 00:14:49.051 "large_cache_size": 16, 00:14:49.051 "task_count": 2048, 00:14:49.051 "sequence_count": 2048, 00:14:49.051 "buf_count": 2048 00:14:49.051 } 00:14:49.051 } 00:14:49.051 ] 00:14:49.051 }, 00:14:49.051 { 00:14:49.051 "subsystem": "bdev", 00:14:49.051 "config": [ 00:14:49.051 { 00:14:49.051 "method": 
"bdev_set_options", 00:14:49.051 "params": { 00:14:49.051 "bdev_io_pool_size": 65535, 00:14:49.051 "bdev_io_cache_size": 256, 00:14:49.052 "bdev_auto_examine": true, 00:14:49.052 "iobuf_small_cache_size": 128, 00:14:49.052 "iobuf_large_cache_size": 16 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_raid_set_options", 00:14:49.052 "params": { 00:14:49.052 "process_window_size_kb": 1024, 00:14:49.052 "process_max_bandwidth_mb_sec": 0 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_iscsi_set_options", 00:14:49.052 "params": { 00:14:49.052 "timeout_sec": 30 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_nvme_set_options", 00:14:49.052 "params": { 00:14:49.052 "action_on_timeout": "none", 00:14:49.052 "timeout_us": 0, 00:14:49.052 "timeout_admin_us": 0, 00:14:49.052 "keep_alive_timeout_ms": 10000, 00:14:49.052 "arbitration_burst": 0, 00:14:49.052 "low_priority_weight": 0, 00:14:49.052 "medium_priority_weight": 0, 00:14:49.052 "high_priority_weight": 0, 00:14:49.052 "nvme_adminq_poll_period_us": 10000, 00:14:49.052 "nvme_ioq_poll_period_us": 0, 00:14:49.052 "io_queue_requests": 512, 00:14:49.052 "delay_cmd_submit": true, 00:14:49.052 "transport_retry_count": 4, 00:14:49.052 "bdev_retry_count": 3, 00:14:49.052 "transport_ack_timeout": 0, 00:14:49.052 "ctrlr_loss_timeout_sec": 0, 00:14:49.052 "reconnect_delay_sec": 0, 00:14:49.052 "fast_io_fail_timeout_sec": 0, 00:14:49.052 "disable_auto_failback": false, 00:14:49.052 "generate_uuids": false, 00:14:49.052 "transport_tos": 0, 00:14:49.052 "nvme_error_stat": false, 00:14:49.052 "rdma_srq_size": 0, 00:14:49.052 "io_path_stat": false, 00:14:49.052 "allow_accel_sequence": false, 00:14:49.052 "rdma_max_cq_size": 0, 00:14:49.052 "rdma_cm_event_timeout_ms": 0, 00:14:49.052 "dhchap_digests": [ 00:14:49.052 "sha256", 00:14:49.052 "sha384", 00:14:49.052 "sha512" 00:14:49.052 ], 00:14:49.052 "dhchap_dhgroups": [ 00:14:49.052 "null", 00:14:49.052 "ffdhe2048", 00:14:49.052 "ffdhe3072", 00:14:49.052 "ffdhe4096", 00:14:49.052 "ffdhe6144", 00:14:49.052 "ffdhe8192" 00:14:49.052 ], 00:14:49.052 "rdma_umr_per_io": false 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_nvme_attach_controller", 00:14:49.052 "params": { 00:14:49.052 "name": "nvme0", 00:14:49.052 "trtype": "TCP", 00:14:49.052 "adrfam": "IPv4", 00:14:49.052 "traddr": "10.0.0.3", 00:14:49.052 "trsvcid": "4420", 00:14:49.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.052 "prchk_reftag": false, 00:14:49.052 "prchk_guard": false, 00:14:49.052 "ctrlr_loss_timeout_sec": 0, 00:14:49.052 "reconnect_delay_sec": 0, 00:14:49.052 "fast_io_fail_timeout_sec": 0, 00:14:49.052 "psk": "key0", 00:14:49.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.052 "hdgst": false, 00:14:49.052 "ddgst": false, 00:14:49.052 "multipath": "multipath" 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_nvme_set_hotplug", 00:14:49.052 "params": { 00:14:49.052 "period_us": 100000, 00:14:49.052 "enable": false 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_enable_histogram", 00:14:49.052 "params": { 00:14:49.052 "name": "nvme0n1", 00:14:49.052 "enable": true 00:14:49.052 } 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "method": "bdev_wait_for_examine" 00:14:49.052 } 00:14:49.052 ] 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "subsystem": "nbd", 00:14:49.052 "config": [] 00:14:49.052 } 00:14:49.052 ] 00:14:49.052 }' 00:14:49.052 [2024-12-10 21:41:49.831951] Starting SPDK 
v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:49.311 [2024-12-10 21:41:49.832474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72661 ] 00:14:49.311 [2024-12-10 21:41:49.979868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.311 [2024-12-10 21:41:50.014414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.569 [2024-12-10 21:41:50.127071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.569 [2024-12-10 21:41:50.161027] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.136 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.136 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:50.136 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.136 21:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:50.393 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.393 21:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.651 Running I/O for 1 seconds... 00:14:51.585 3720.00 IOPS, 14.53 MiB/s 00:14:51.585 Latency(us) 00:14:51.585 [2024-12-10T21:41:52.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.585 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:51.585 Verification LBA range: start 0x0 length 0x2000 00:14:51.585 nvme0n1 : 1.02 3782.02 14.77 0.00 0.00 33509.14 5600.35 32648.84 00:14:51.585 [2024-12-10T21:41:52.368Z] =================================================================================================================== 00:14:51.585 [2024-12-10T21:41:52.368Z] Total : 3782.02 14.77 0.00 0.00 33509.14 5600.35 32648.84 00:14:51.585 { 00:14:51.585 "results": [ 00:14:51.585 { 00:14:51.585 "job": "nvme0n1", 00:14:51.585 "core_mask": "0x2", 00:14:51.585 "workload": "verify", 00:14:51.585 "status": "finished", 00:14:51.585 "verify_range": { 00:14:51.585 "start": 0, 00:14:51.585 "length": 8192 00:14:51.585 }, 00:14:51.585 "queue_depth": 128, 00:14:51.585 "io_size": 4096, 00:14:51.585 "runtime": 1.01771, 00:14:51.585 "iops": 3782.0204183903074, 00:14:51.585 "mibps": 14.773517259337138, 00:14:51.585 "io_failed": 0, 00:14:51.585 "io_timeout": 0, 00:14:51.585 "avg_latency_us": 33509.14004393113, 00:14:51.585 "min_latency_us": 5600.349090909091, 00:14:51.585 "max_latency_us": 32648.843636363636 00:14:51.585 } 00:14:51.585 ], 00:14:51.585 "core_count": 1 00:14:51.585 } 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:51.585 21:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:51.585 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:51.585 nvmf_trace.0 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72661 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72661 ']' 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72661 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72661 00:14:51.843 killing process with pid 72661 00:14:51.843 Received shutdown signal, test time was about 1.000000 seconds 00:14:51.843 00:14:51.843 Latency(us) 00:14:51.843 [2024-12-10T21:41:52.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.843 [2024-12-10T21:41:52.626Z] =================================================================================================================== 00:14:51.843 [2024-12-10T21:41:52.626Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72661' 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72661 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72661 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.843 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.101 rmmod nvme_tcp 00:14:52.101 rmmod nvme_fabrics 00:14:52.101 rmmod 
nvme_keyring 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72628 ']' 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72628 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72628 ']' 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72628 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72628 00:14:52.101 killing process with pid 72628 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72628' 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72628 00:14:52.101 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72628 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.359 21:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.359 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.359 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.359 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.359 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.359 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.iMHNnv7nNn /tmp/tmp.eJoHchHq21 /tmp/tmp.MtgUcoG7Uo 00:14:52.360 00:14:52.360 real 1m22.076s 00:14:52.360 user 2m16.465s 00:14:52.360 sys 0m25.736s 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.360 ************************************ 00:14:52.360 END TEST nvmf_tls 00:14:52.360 ************************************ 00:14:52.360 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 ************************************ 00:14:52.619 START TEST nvmf_fips 00:14:52.619 ************************************ 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:52.619 * Looking for test storage... 
00:14:52.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.619 --rc genhtml_branch_coverage=1 00:14:52.619 --rc genhtml_function_coverage=1 00:14:52.619 --rc genhtml_legend=1 00:14:52.619 --rc geninfo_all_blocks=1 00:14:52.619 --rc geninfo_unexecuted_blocks=1 00:14:52.619 00:14:52.619 ' 00:14:52.619 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.619 --rc genhtml_branch_coverage=1 00:14:52.619 --rc genhtml_function_coverage=1 00:14:52.619 --rc genhtml_legend=1 00:14:52.619 --rc geninfo_all_blocks=1 00:14:52.619 --rc geninfo_unexecuted_blocks=1 00:14:52.619 00:14:52.620 ' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.620 --rc genhtml_branch_coverage=1 00:14:52.620 --rc genhtml_function_coverage=1 00:14:52.620 --rc genhtml_legend=1 00:14:52.620 --rc geninfo_all_blocks=1 00:14:52.620 --rc geninfo_unexecuted_blocks=1 00:14:52.620 00:14:52.620 ' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.620 --rc genhtml_branch_coverage=1 00:14:52.620 --rc genhtml_function_coverage=1 00:14:52.620 --rc genhtml_legend=1 00:14:52.620 --rc geninfo_all_blocks=1 00:14:52.620 --rc geninfo_unexecuted_blocks=1 00:14:52.620 00:14:52.620 ' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
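Editor's note: the entries above (and again further down, when fips.sh checks that the system OpenSSL is >= 3.0.0) step through scripts/common.sh's cmp_versions helper one trace line at a time: each version string is split on ".", "-" and ":" and the fields are compared numerically from left to right. A condensed standalone sketch of that behaviour, inferred from the trace rather than copied from the actual scripts/common.sh source:

    # Minimal sketch of the version comparison seen in the trace (assumption: numeric fields only).
    cmp_versions() {
        local op=$2 IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' || $op == '>=' ]]; return      # first field already decides
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == *=* ]]                                      # all fields equal
    }
    # Values from this log:
    #   cmp_versions 1.15 '<'  2      -> succeeds, so the lcov 1.x option set is exported
    #   cmp_versions 3.1.1 '>=' 3.0.0 -> succeeds, so the OpenSSL 3.x FIPS provider path is taken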
00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:52.620 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:52.889 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:52.890 Error setting digest 00:14:52.890 40E2FBB9657F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:52.890 40E2FBB9657F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.890 
21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.890 Cannot find device "nvmf_init_br" 00:14:52.890 21:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.890 Cannot find device "nvmf_init_br2" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.890 Cannot find device "nvmf_tgt_br" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.890 Cannot find device "nvmf_tgt_br2" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.890 Cannot find device "nvmf_init_br" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.890 Cannot find device "nvmf_init_br2" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.890 Cannot find device "nvmf_tgt_br" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.890 Cannot find device "nvmf_tgt_br2" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.890 Cannot find device "nvmf_br" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.890 Cannot find device "nvmf_init_if" 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:52.890 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.890 Cannot find device "nvmf_init_if2" 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.148 21:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.148 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:53.407 00:14:53.407 --- 10.0.0.3 ping statistics --- 00:14:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.407 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.407 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.407 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:53.407 00:14:53.407 --- 10.0.0.4 ping statistics --- 00:14:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.407 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:53.407 00:14:53.407 --- 10.0.0.1 ping statistics --- 00:14:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.407 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:53.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:14:53.407 00:14:53.407 --- 10.0.0.2 ping statistics --- 00:14:53.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.407 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72972 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72972 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72972 ']' 00:14:53.407 21:41:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.407 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.407 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.407 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.407 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.407 [2024-12-10 21:41:54.075241] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:14:53.407 [2024-12-10 21:41:54.075344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.665 [2024-12-10 21:41:54.250353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.665 [2024-12-10 21:41:54.281941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.665 [2024-12-10 21:41:54.282001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.665 [2024-12-10 21:41:54.282013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.665 [2024-12-10 21:41:54.282021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.665 [2024-12-10 21:41:54.282028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.665 [2024-12-10 21:41:54.282339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.665 [2024-12-10 21:41:54.312106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.VMh 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.VMh 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.VMh 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.VMh 00:14:53.665 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.924 [2024-12-10 21:41:54.647906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.924 [2024-12-10 21:41:54.663869] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:53.924 [2024-12-10 21:41:54.664129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.924 malloc0 00:14:54.183 21:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73000 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73000 /var/tmp/bdevperf.sock 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73000 ']' 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.183 21:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:54.183 [2024-12-10 21:41:54.799869] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:14:54.183 [2024-12-10 21:41:54.799959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73000 ] 00:14:54.183 [2024-12-10 21:41:54.942418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.446 [2024-12-10 21:41:54.987018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.446 [2024-12-10 21:41:55.018872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.446 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.446 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:54.446 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.VMh 00:14:54.705 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:54.963 [2024-12-10 21:41:55.615702] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.963 TLSTESTn1 00:14:54.963 21:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.221 Running I/O for 10 seconds... 
00:14:57.087 3837.00 IOPS, 14.99 MiB/s [2024-12-10T21:41:59.244Z] 3775.50 IOPS, 14.75 MiB/s [2024-12-10T21:42:00.179Z] 3790.67 IOPS, 14.81 MiB/s [2024-12-10T21:42:01.113Z] 3732.75 IOPS, 14.58 MiB/s [2024-12-10T21:42:02.054Z] 3734.80 IOPS, 14.59 MiB/s [2024-12-10T21:42:02.988Z] 3757.00 IOPS, 14.68 MiB/s [2024-12-10T21:42:03.922Z] 3774.71 IOPS, 14.74 MiB/s [2024-12-10T21:42:04.855Z] 3786.00 IOPS, 14.79 MiB/s [2024-12-10T21:42:06.229Z] 3798.56 IOPS, 14.84 MiB/s [2024-12-10T21:42:06.229Z] 3803.10 IOPS, 14.86 MiB/s 00:15:05.446 Latency(us) 00:15:05.446 [2024-12-10T21:42:06.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.446 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:05.446 Verification LBA range: start 0x0 length 0x2000 00:15:05.446 TLSTESTn1 : 10.02 3809.50 14.88 0.00 0.00 33541.19 4736.47 45994.36 00:15:05.446 [2024-12-10T21:42:06.229Z] =================================================================================================================== 00:15:05.446 [2024-12-10T21:42:06.229Z] Total : 3809.50 14.88 0.00 0.00 33541.19 4736.47 45994.36 00:15:05.446 { 00:15:05.446 "results": [ 00:15:05.446 { 00:15:05.446 "job": "TLSTESTn1", 00:15:05.446 "core_mask": "0x4", 00:15:05.446 "workload": "verify", 00:15:05.446 "status": "finished", 00:15:05.446 "verify_range": { 00:15:05.446 "start": 0, 00:15:05.446 "length": 8192 00:15:05.446 }, 00:15:05.446 "queue_depth": 128, 00:15:05.446 "io_size": 4096, 00:15:05.446 "runtime": 10.016529, 00:15:05.446 "iops": 3809.5032720416425, 00:15:05.446 "mibps": 14.880872156412666, 00:15:05.446 "io_failed": 0, 00:15:05.446 "io_timeout": 0, 00:15:05.446 "avg_latency_us": 33541.19155396938, 00:15:05.446 "min_latency_us": 4736.465454545454, 00:15:05.446 "max_latency_us": 45994.35636363636 00:15:05.446 } 00:15:05.446 ], 00:15:05.446 "core_count": 1 00:15:05.446 } 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:05.446 nvmf_trace.0 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73000 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73000 ']' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73000 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73000 00:15:05.446 killing process with pid 73000 00:15:05.446 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.446 00:15:05.446 Latency(us) 00:15:05.446 [2024-12-10T21:42:06.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.446 [2024-12-10T21:42:06.229Z] =================================================================================================================== 00:15:05.446 [2024-12-10T21:42:06.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73000' 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73000 00:15:05.446 21:42:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73000 00:15:05.446 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:05.446 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:05.446 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:05.447 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:05.447 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:05.447 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:05.447 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:05.447 rmmod nvme_tcp 00:15:05.447 rmmod nvme_fabrics 00:15:05.447 rmmod nvme_keyring 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72972 ']' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72972 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72972 ']' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72972 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72972 00:15:05.705 killing process with pid 72972 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72972' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72972 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72972 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:05.705 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.964 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:05.964 21:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.VMh 00:15:05.964 00:15:05.965 real 0m13.460s 00:15:05.965 user 0m18.429s 00:15:05.965 sys 0m5.549s 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:05.965 ************************************ 00:15:05.965 END TEST nvmf_fips 00:15:05.965 ************************************ 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:05.965 ************************************ 00:15:05.965 START TEST nvmf_control_msg_list 00:15:05.965 ************************************ 00:15:05.965 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:06.223 * Looking for test storage... 00:15:06.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.224 --rc genhtml_branch_coverage=1 00:15:06.224 --rc genhtml_function_coverage=1 00:15:06.224 --rc genhtml_legend=1 00:15:06.224 --rc geninfo_all_blocks=1 00:15:06.224 --rc geninfo_unexecuted_blocks=1 00:15:06.224 00:15:06.224 ' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.224 --rc genhtml_branch_coverage=1 00:15:06.224 --rc genhtml_function_coverage=1 00:15:06.224 --rc genhtml_legend=1 00:15:06.224 --rc geninfo_all_blocks=1 00:15:06.224 --rc geninfo_unexecuted_blocks=1 00:15:06.224 00:15:06.224 ' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.224 --rc genhtml_branch_coverage=1 00:15:06.224 --rc genhtml_function_coverage=1 00:15:06.224 --rc genhtml_legend=1 00:15:06.224 --rc geninfo_all_blocks=1 00:15:06.224 --rc geninfo_unexecuted_blocks=1 00:15:06.224 00:15:06.224 ' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.224 --rc genhtml_branch_coverage=1 00:15:06.224 --rc genhtml_function_coverage=1 00:15:06.224 --rc genhtml_legend=1 00:15:06.224 --rc geninfo_all_blocks=1 00:15:06.224 --rc 
geninfo_unexecuted_blocks=1 00:15:06.224 00:15:06.224 ' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.224 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:06.224 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:06.225 Cannot find device "nvmf_init_br" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:06.225 Cannot find device "nvmf_init_br2" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:06.225 Cannot find device "nvmf_tgt_br" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.225 Cannot find device "nvmf_tgt_br2" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:06.225 Cannot find device "nvmf_init_br" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:06.225 Cannot find device "nvmf_init_br2" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:06.225 Cannot find device "nvmf_tgt_br" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:06.225 Cannot find device "nvmf_tgt_br2" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:06.225 Cannot find device "nvmf_br" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:06.225 Cannot find 
device "nvmf_init_if" 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:06.225 21:42:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:06.483 Cannot find device "nvmf_init_if2" 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:06.483 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:06.484 21:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:06.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:15:06.484 00:15:06.484 --- 10.0.0.3 ping statistics --- 00:15:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.484 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:06.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:06.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:15:06.484 00:15:06.484 --- 10.0.0.4 ping statistics --- 00:15:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.484 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:06.484 00:15:06.484 --- 10.0.0.1 ping statistics --- 00:15:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.484 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:06.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:06.484 00:15:06.484 --- 10.0.0.2 ping statistics --- 00:15:06.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.484 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:06.484 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:06.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73383 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73383 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73383 ']' 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
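Editor's note: the stretch of trace that follows is nvmfappstart bringing the NVMe-oF target up inside the test namespace and then blocking until its RPC socket answers. Condensed into plain shell (binary path, flags, namespace name and the retry count of 100 are copied from the trace; the socket-existence poll is an assumed simplification of the waitforlisten helper, not its actual implementation), the step amounts to:

    # Rough equivalent of the nvmfappstart step traced below (sketch only).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll until the RPC socket exists; waitforlisten does a stricter check.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

Only once this returns does the test start issuing rpc_cmd calls against the target, which is why the pid (73383 here) is captured first.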
00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.742 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:06.742 [2024-12-10 21:42:07.361579] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:06.742 [2024-12-10 21:42:07.361675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.742 [2024-12-10 21:42:07.517853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.001 [2024-12-10 21:42:07.567608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.001 [2024-12-10 21:42:07.567666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.001 [2024-12-10 21:42:07.567678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.001 [2024-12-10 21:42:07.567687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.001 [2024-12-10 21:42:07.567694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.001 [2024-12-10 21:42:07.568000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.001 [2024-12-10 21:42:07.598622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 [2024-12-10 21:42:07.683165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 Malloc0 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:07.001 [2024-12-10 21:42:07.717996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.001 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73402 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73403 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73404 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:07.002 21:42:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73402 00:15:07.294 [2024-12-10 21:42:07.896496] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:07.294 [2024-12-10 21:42:07.896710] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:07.294 [2024-12-10 21:42:07.906523] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:08.259 Initializing NVMe Controllers 00:15:08.259 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:08.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:08.259 Initialization complete. Launching workers. 00:15:08.259 ======================================================== 00:15:08.259 Latency(us) 00:15:08.259 Device Information : IOPS MiB/s Average min max 00:15:08.259 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2889.98 11.29 345.50 201.09 668.90 00:15:08.259 ======================================================== 00:15:08.259 Total : 2889.98 11.29 345.50 201.09 668.90 00:15:08.259 00:15:08.259 Initializing NVMe Controllers 00:15:08.259 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:08.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:08.259 Initialization complete. Launching workers. 00:15:08.259 ======================================================== 00:15:08.259 Latency(us) 00:15:08.259 Device Information : IOPS MiB/s Average min max 00:15:08.259 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2948.00 11.52 338.60 193.41 646.18 00:15:08.259 ======================================================== 00:15:08.259 Total : 2948.00 11.52 338.60 193.41 646.18 00:15:08.259 00:15:08.259 Initializing NVMe Controllers 00:15:08.259 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:08.259 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:08.259 Initialization complete. Launching workers. 
00:15:08.259 ======================================================== 00:15:08.259 Latency(us) 00:15:08.259 Device Information : IOPS MiB/s Average min max 00:15:08.259 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3225.96 12.60 309.56 124.64 598.60 00:15:08.259 ======================================================== 00:15:08.259 Total : 3225.96 12.60 309.56 124.64 598.60 00:15:08.259 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73403 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73404 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.259 21:42:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.259 rmmod nvme_tcp 00:15:08.259 rmmod nvme_fabrics 00:15:08.259 rmmod nvme_keyring 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73383 ']' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73383 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73383 ']' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73383 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73383 00:15:08.517 killing process with pid 73383 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73383' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73383 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73383 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:08.517 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:08.775 00:15:08.775 real 0m2.797s 00:15:08.775 user 0m4.711s 00:15:08.775 
sys 0m1.258s 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:08.775 ************************************ 00:15:08.775 END TEST nvmf_control_msg_list 00:15:08.775 ************************************ 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.775 ************************************ 00:15:08.775 START TEST nvmf_wait_for_buf 00:15:08.775 ************************************ 00:15:08.775 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:09.034 * Looking for test storage... 00:15:09.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.034 --rc genhtml_branch_coverage=1 00:15:09.034 --rc genhtml_function_coverage=1 00:15:09.034 --rc genhtml_legend=1 00:15:09.034 --rc geninfo_all_blocks=1 00:15:09.034 --rc geninfo_unexecuted_blocks=1 00:15:09.034 00:15:09.034 ' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.034 --rc genhtml_branch_coverage=1 00:15:09.034 --rc genhtml_function_coverage=1 00:15:09.034 --rc genhtml_legend=1 00:15:09.034 --rc geninfo_all_blocks=1 00:15:09.034 --rc geninfo_unexecuted_blocks=1 00:15:09.034 00:15:09.034 ' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.034 --rc genhtml_branch_coverage=1 00:15:09.034 --rc genhtml_function_coverage=1 00:15:09.034 --rc genhtml_legend=1 00:15:09.034 --rc geninfo_all_blocks=1 00:15:09.034 --rc geninfo_unexecuted_blocks=1 00:15:09.034 00:15:09.034 ' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.034 --rc genhtml_branch_coverage=1 00:15:09.034 --rc genhtml_function_coverage=1 00:15:09.034 --rc genhtml_legend=1 00:15:09.034 --rc geninfo_all_blocks=1 00:15:09.034 --rc geninfo_unexecuted_blocks=1 00:15:09.034 00:15:09.034 ' 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.034 21:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.034 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
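Editor's note: as with the control_msg_list run earlier in this log, nvmftestinit first tries to tear down any stale interfaces (the "Cannot find device" messages below are expected on a clean host) and then rebuilds the veth/bridge topology. Condensed to a single initiator/target leg (interface names and addresses as they appear in the earlier trace; the second leg, the iptables comment tags and error handling are omitted), the setup nvmf_veth_init performs is roughly:

    # Condensed sketch of the test network; the real helper also creates
    # nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 the same way.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator side reaching the target namespace

The pings at the end of the setup (seen in the trace for the previous test) confirm both legs of the bridge forward traffic before the target is started.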
00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:09.035 Cannot find device "nvmf_init_br" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:09.035 Cannot find device "nvmf_init_br2" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:09.035 Cannot find device "nvmf_tgt_br" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.035 Cannot find device "nvmf_tgt_br2" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:09.035 Cannot find device "nvmf_init_br" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:09.035 Cannot find device "nvmf_init_br2" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:09.035 Cannot find device "nvmf_tgt_br" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:09.035 Cannot find device "nvmf_tgt_br2" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:09.035 Cannot find device "nvmf_br" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:09.035 Cannot find device "nvmf_init_if" 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:09.035 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:09.293 Cannot find device "nvmf_init_if2" 00:15:09.293 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:09.293 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.294 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:09.294 21:42:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:09.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:09.294 00:15:09.294 --- 10.0.0.3 ping statistics --- 00:15:09.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.294 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:09.294 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:09.294 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:09.294 00:15:09.294 --- 10.0.0.4 ping statistics --- 00:15:09.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.294 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:09.294 00:15:09.294 --- 10.0.0.1 ping statistics --- 00:15:09.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.294 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:09.294 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:09.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:09.552 00:15:09.552 --- 10.0.0.2 ping statistics --- 00:15:09.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.552 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73643 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73643 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73643 ']' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.552 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.552 [2024-12-10 21:42:10.175878] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
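Note: the nvmfappstart step above (nvmf/common.sh@508-510) boils down to launching nvmf_tgt inside the target namespace and waiting for its RPC socket; a rough equivalent, with the command line copied from the log and the waitforlisten helper simplified to a plain socket poll:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # wait until the app is up and listening on its UNIX-domain RPC socket
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done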
00:15:09.552 [2024-12-10 21:42:10.175989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.552 [2024-12-10 21:42:10.325352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.811 [2024-12-10 21:42:10.373339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.811 [2024-12-10 21:42:10.373403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.811 [2024-12-10 21:42:10.373422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.811 [2024-12-10 21:42:10.373436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.811 [2024-12-10 21:42:10.373469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.811 [2024-12-10 21:42:10.373837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 [2024-12-10 21:42:10.486355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 Malloc0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 [2024-12-10 21:42:10.526156] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 [2024-12-10 21:42:10.550279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.811 21:42:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:10.069 [2024-12-10 21:42:10.741575] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:11.444 Initializing NVMe Controllers 00:15:11.444 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:11.444 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:11.444 Initialization complete. Launching workers. 00:15:11.444 ======================================================== 00:15:11.444 Latency(us) 00:15:11.444 Device Information : IOPS MiB/s Average min max 00:15:11.444 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 502.98 62.87 7953.35 3978.99 14024.05 00:15:11.444 ======================================================== 00:15:11.444 Total : 502.98 62.87 7953.35 3978.99 14024.05 00:15:11.444 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.444 rmmod nvme_tcp 00:15:11.444 rmmod nvme_fabrics 00:15:11.444 rmmod nvme_keyring 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73643 ']' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73643 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73643 ']' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73643 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73643 00:15:11.444 killing process with pid 73643 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73643' 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73643 00:15:11.444 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73643 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:11.702 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:11.960 00:15:11.960 real 0m3.091s 00:15:11.960 user 0m2.457s 00:15:11.960 sys 0m0.712s 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:11.960 ************************************ 00:15:11.960 END TEST nvmf_wait_for_buf 00:15:11.960 ************************************ 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.960 ************************************ 00:15:11.960 START TEST nvmf_nsid 00:15:11.960 ************************************ 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:11.960 * Looking for test storage... 
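Note: stripped of the xtrace noise, the nvmf_wait_for_buf test that just finished amounts to the RPC sequence below (values copied from the log; rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the running target):

  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately starve the small iobuf pool
  rpc_cmd framework_start_init
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  # pass criterion: the starved pool must have forced buffer-wait retries
  # (4788 retries were recorded above, so the check against 0 passes)
  rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'

The teardown that follows then unloads nvme-tcp/nvme-fabrics/nvme-keyring, strips only the firewall rules carrying the SPDK_NVMF comment (iptables-save | grep -v SPDK_NVMF | iptables-restore), and removes the veth, bridge and namespace devices.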
00:15:11.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:11.960 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:12.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.219 --rc genhtml_branch_coverage=1 00:15:12.219 --rc genhtml_function_coverage=1 00:15:12.219 --rc genhtml_legend=1 00:15:12.219 --rc geninfo_all_blocks=1 00:15:12.219 --rc geninfo_unexecuted_blocks=1 00:15:12.219 00:15:12.219 ' 00:15:12.219 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:12.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.219 --rc genhtml_branch_coverage=1 00:15:12.219 --rc genhtml_function_coverage=1 00:15:12.219 --rc genhtml_legend=1 00:15:12.220 --rc geninfo_all_blocks=1 00:15:12.220 --rc geninfo_unexecuted_blocks=1 00:15:12.220 00:15:12.220 ' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:12.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.220 --rc genhtml_branch_coverage=1 00:15:12.220 --rc genhtml_function_coverage=1 00:15:12.220 --rc genhtml_legend=1 00:15:12.220 --rc geninfo_all_blocks=1 00:15:12.220 --rc geninfo_unexecuted_blocks=1 00:15:12.220 00:15:12.220 ' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:12.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.220 --rc genhtml_branch_coverage=1 00:15:12.220 --rc genhtml_function_coverage=1 00:15:12.220 --rc genhtml_legend=1 00:15:12.220 --rc geninfo_all_blocks=1 00:15:12.220 --rc geninfo_unexecuted_blocks=1 00:15:12.220 00:15:12.220 ' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
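Note: the lcov probing a few entries up is just a dotted-version comparison done field by field in scripts/common.sh; an equivalent check, sketched here with sort -V instead of the script's own field loop:

  # true if version $1 is older than version $2, e.g. lt 1.15 2
  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi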
00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.220 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.220 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:12.221 Cannot find device "nvmf_init_br" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:12.221 Cannot find device "nvmf_init_br2" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:12.221 Cannot find device "nvmf_tgt_br" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.221 Cannot find device "nvmf_tgt_br2" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:12.221 Cannot find device "nvmf_init_br" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:12.221 Cannot find device "nvmf_init_br2" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.221 Cannot find device "nvmf_tgt_br" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.221 Cannot find device "nvmf_tgt_br2" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.221 Cannot find device "nvmf_br" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.221 Cannot find device "nvmf_init_if" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.221 Cannot find device "nvmf_init_if2" 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:12.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.221 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.479 21:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
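Note: this is the same bridged-veth topology nvmf_veth_init built for the previous test; condensed, the commands in this block come down to (names and addresses exactly as in the log, up/master steps folded into loops):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator side
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up && ip link set "$dev" master nvmf_br
  done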
00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:12.479 00:15:12.479 --- 10.0.0.3 ping statistics --- 00:15:12.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.479 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:12.479 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.479 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.479 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:12.479 00:15:12.479 --- 10.0.0.4 ping statistics --- 00:15:12.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.479 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:12.737 00:15:12.737 --- 10.0.0.1 ping statistics --- 00:15:12.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.737 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:12.737 00:15:12.737 --- 10.0.0.2 ping statistics --- 00:15:12.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.737 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73904 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73904 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73904 ']' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.737 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:12.737 [2024-12-10 21:42:13.368433] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:15:12.737 [2024-12-10 21:42:13.368552] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.996 [2024-12-10 21:42:13.538252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.996 [2024-12-10 21:42:13.584739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.996 [2024-12-10 21:42:13.584809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.996 [2024-12-10 21:42:13.584824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.996 [2024-12-10 21:42:13.584836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.996 [2024-12-10 21:42:13.584848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.996 [2024-12-10 21:42:13.585363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.996 [2024-12-10 21:42:13.620738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73923 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=f8b57dc6-0bbd-4555-9b3a-45fa07dfa9cb 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=08016fb1-0b10-47a6-a5c1-2110dfd65aed 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=df388bc2-f734-4297-ba6c-d0bcae070c22 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.996 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:12.996 null0 00:15:12.996 null1 00:15:12.996 null2 00:15:12.996 [2024-12-10 21:42:13.760700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.255 [2024-12-10 21:42:13.784855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.255 [2024-12-10 21:42:13.788097] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:13.255 [2024-12-10 21:42:13.788188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73923 ] 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73923 /var/tmp/tgt2.sock 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.255 21:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:13.255 [2024-12-10 21:42:13.941590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.255 [2024-12-10 21:42:13.981412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.255 [2024-12-10 21:42:14.028379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.515 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.515 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:13.515 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:14.081 [2024-12-10 21:42:14.596871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.081 [2024-12-10 21:42:14.612986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:14.081 nvme0n1 nvme0n2 00:15:14.081 nvme1n1 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:15:14.081 21:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:15.068 21:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid f8b57dc6-0bbd-4555-9b3a-45fa07dfa9cb 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:15.068 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f8b57dc60bbd45559b3a45fa07dfa9cb 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F8B57DC60BBD45559B3A45FA07DFA9CB 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ F8B57DC60BBD45559B3A45FA07DFA9CB == \F\8\B\5\7\D\C\6\0\B\B\D\4\5\5\5\9\B\3\A\4\5\F\A\0\7\D\F\A\9\C\B ]] 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:15.326 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 08016fb1-0b10-47a6-a5c1-2110dfd65aed 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=08016fb10b1047a6a5c12110dfd65aed 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 08016FB10B1047A6A5C12110DFD65AED 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 08016FB10B1047A6A5C12110DFD65AED == \0\8\0\1\6\F\B\1\0\B\1\0\4\7\A\6\A\5\C\1\2\1\1\0\D\F\D\6\5\A\E\D ]] 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:15.327 21:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid df388bc2-f734-4297-ba6c-d0bcae070c22 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:15.327 21:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:15.327 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=df388bc2f7344297ba6cd0bcae070c22 00:15:15.327 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DF388BC2F7344297BA6CD0BCAE070C22 00:15:15.327 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DF388BC2F7344297BA6CD0BCAE070C22 == \D\F\3\8\8\B\C\2\F\7\3\4\4\2\9\7\B\A\6\C\D\0\B\C\A\E\0\7\0\C\2\2 ]] 00:15:15.327 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73923 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73923 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:15.585 killing process with pid 73923 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73923 00:15:15.585 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73923 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:15.843 rmmod nvme_tcp 00:15:15.843 rmmod nvme_fabrics 00:15:15.843 rmmod nvme_keyring 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73904 ']' 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73904 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73904 ']' 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73904 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73904 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.843 killing process with pid 73904 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73904' 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73904 00:15:15.843 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73904 00:15:16.100 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:16.100 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:16.100 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:16.100 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:16.100 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:16.101 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:16.359 00:15:16.359 real 0m4.320s 00:15:16.359 user 0m6.469s 00:15:16.359 sys 0m1.497s 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.359 21:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:16.359 ************************************ 00:15:16.359 END TEST nvmf_nsid 00:15:16.359 ************************************ 00:15:16.359 21:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:16.359 00:15:16.359 real 5m21.695s 00:15:16.359 user 11m30.625s 00:15:16.359 sys 1m7.660s 00:15:16.359 21:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.359 21:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.359 ************************************ 00:15:16.359 END TEST nvmf_target_extra 00:15:16.359 ************************************ 00:15:16.359 21:42:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:16.359 21:42:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.359 21:42:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.359 21:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.359 ************************************ 00:15:16.359 START TEST nvmf_host 00:15:16.359 ************************************ 00:15:16.359 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:16.359 * Looking for test storage... 
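Before the nvmf_host suite begins below, the per-namespace check that the just-completed nvmf_nsid run exercised (uuid2nguid / nvme_get_nguid in the trace above) reduces to roughly the following sketch. Helper names are collapsed, and the exact case normalization done by nvmf/common.sh is assumed rather than shown in the trace.

  # Each namespace is created with a uuidgen-generated UUID; the test expects the
  # NGUID reported by the kernel to be that UUID with the dashes stripped.
  ns_uuid=$(uuidgen)                             # e.g. f8b57dc6-0bbd-4555-9b3a-45fa07dfa9cb
  expected_nguid=$(tr -d '-' <<< "$ns_uuid")     # f8b57dc60bbd45559b3a45fa07dfa9cb
  reported_nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  if [[ "${reported_nguid,,}" == "${expected_nguid,,}" ]]; then
      echo "nsid 1: NGUID matches namespace UUID"
  fi

The same comparison is repeated for /dev/nvme0n2 and /dev/nvme0n3 against the other two generated UUIDs, as seen in the trace.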
00:15:16.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:16.359 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.359 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.359 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:16.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.619 --rc genhtml_branch_coverage=1 00:15:16.619 --rc genhtml_function_coverage=1 00:15:16.619 --rc genhtml_legend=1 00:15:16.619 --rc geninfo_all_blocks=1 00:15:16.619 --rc geninfo_unexecuted_blocks=1 00:15:16.619 00:15:16.619 ' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:16.619 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:16.619 --rc genhtml_branch_coverage=1 00:15:16.619 --rc genhtml_function_coverage=1 00:15:16.619 --rc genhtml_legend=1 00:15:16.619 --rc geninfo_all_blocks=1 00:15:16.619 --rc geninfo_unexecuted_blocks=1 00:15:16.619 00:15:16.619 ' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:16.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.619 --rc genhtml_branch_coverage=1 00:15:16.619 --rc genhtml_function_coverage=1 00:15:16.619 --rc genhtml_legend=1 00:15:16.619 --rc geninfo_all_blocks=1 00:15:16.619 --rc geninfo_unexecuted_blocks=1 00:15:16.619 00:15:16.619 ' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:16.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.619 --rc genhtml_branch_coverage=1 00:15:16.619 --rc genhtml_function_coverage=1 00:15:16.619 --rc genhtml_legend=1 00:15:16.619 --rc geninfo_all_blocks=1 00:15:16.619 --rc geninfo_unexecuted_blocks=1 00:15:16.619 00:15:16.619 ' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.619 21:42:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:16.620 
21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.620 ************************************ 00:15:16.620 START TEST nvmf_identify 00:15:16.620 ************************************ 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:16.620 * Looking for test storage... 00:15:16.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:16.620 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.879 --rc genhtml_branch_coverage=1 00:15:16.879 --rc genhtml_function_coverage=1 00:15:16.879 --rc genhtml_legend=1 00:15:16.879 --rc geninfo_all_blocks=1 00:15:16.879 --rc geninfo_unexecuted_blocks=1 00:15:16.879 00:15:16.879 ' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.879 --rc genhtml_branch_coverage=1 00:15:16.879 --rc genhtml_function_coverage=1 00:15:16.879 --rc genhtml_legend=1 00:15:16.879 --rc geninfo_all_blocks=1 00:15:16.879 --rc geninfo_unexecuted_blocks=1 00:15:16.879 00:15:16.879 ' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.879 --rc genhtml_branch_coverage=1 00:15:16.879 --rc genhtml_function_coverage=1 00:15:16.879 --rc genhtml_legend=1 00:15:16.879 --rc geninfo_all_blocks=1 00:15:16.879 --rc geninfo_unexecuted_blocks=1 00:15:16.879 00:15:16.879 ' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.879 --rc genhtml_branch_coverage=1 00:15:16.879 --rc genhtml_function_coverage=1 00:15:16.879 --rc genhtml_legend=1 00:15:16.879 --rc geninfo_all_blocks=1 00:15:16.879 --rc geninfo_unexecuted_blocks=1 00:15:16.879 00:15:16.879 ' 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.879 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.880 
21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.880 21:42:17 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:16.880 Cannot find device "nvmf_init_br" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:16.880 Cannot find device "nvmf_init_br2" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:16.880 Cannot find device "nvmf_tgt_br" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:16.880 Cannot find device "nvmf_tgt_br2" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:16.880 Cannot find device "nvmf_init_br" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:16.880 Cannot find device "nvmf_init_br2" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:16.880 Cannot find device "nvmf_tgt_br" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:16.880 Cannot find device "nvmf_tgt_br2" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:16.880 Cannot find device "nvmf_br" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:16.880 Cannot find device "nvmf_init_if" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:16.880 Cannot find device "nvmf_init_if2" 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.880 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.139 
21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
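For reference, the topology that nvmf/common.sh assembles in the trace above can be condensed into the following stand-alone sketch. Interface names, addresses, and iptables rules are taken directly from the log; the real script also tears down any leftovers first and tags its iptables rules with an SPDK_NVMF comment so they can be cleaned up later.

# initiator-side veth ends stay in the root namespace,
# target-side ends are moved into a dedicated namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1/.2 for the initiator side, 10.0.0.3/.4 for the target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up and join the peer ends with a bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# admit NVMe/TCP (port 4420) on the initiator interfaces and let traffic cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# the four pings that follow in the log verify initiator <-> target reachability in both directions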
00:15:17.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:15:17.139 00:15:17.139 --- 10.0.0.3 ping statistics --- 00:15:17.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.139 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:17.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:17.139 00:15:17.139 --- 10.0.0.4 ping statistics --- 00:15:17.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.139 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:17.139 00:15:17.139 --- 10.0.0.1 ping statistics --- 00:15:17.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.139 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:17.139 00:15:17.139 --- 10.0.0.2 ping statistics --- 00:15:17.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.139 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74275 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74275 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74275 ']' 00:15:17.139 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.139 21:42:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.397 [2024-12-10 21:42:17.942406] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:17.397 [2024-12-10 21:42:17.942819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.397 [2024-12-10 21:42:18.100328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.397 [2024-12-10 21:42:18.134615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.397 [2024-12-10 21:42:18.134671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.397 [2024-12-10 21:42:18.134682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.397 [2024-12-10 21:42:18.134690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.397 [2024-12-10 21:42:18.134697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
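The target launch and the wait for the RPC socket above come from the identify.sh/autotest_common.sh helpers. Stripped of the wrappers, they amount to roughly the following; the real waitforlisten polls with a retry limit, so the loop here is only a simplified stand-in. The flags mirror the trace: -i 0 sets the shared-memory id, -e 0xFFFF the tracepoint group mask reported in the NOTICE lines, and -m 0xF runs reactors on cores 0-3.

# launch the target inside the namespace that owns 10.0.0.3/.4
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# block until the app serves RPCs on /var/tmp/spdk.sock (simplified waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done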
00:15:17.397 [2024-12-10 21:42:18.135429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.397 [2024-12-10 21:42:18.135650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.397 [2024-12-10 21:42:18.136060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.397 [2024-12-10 21:42:18.136070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.397 [2024-12-10 21:42:18.166481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 [2024-12-10 21:42:18.223921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 Malloc0 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 [2024-12-10 21:42:18.321252] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.669 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:17.669 [ 00:15:17.669 { 00:15:17.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.669 "subtype": "Discovery", 00:15:17.669 "listen_addresses": [ 00:15:17.669 { 00:15:17.669 "trtype": "TCP", 00:15:17.669 "adrfam": "IPv4", 00:15:17.669 "traddr": "10.0.0.3", 00:15:17.669 "trsvcid": "4420" 00:15:17.669 } 00:15:17.669 ], 00:15:17.669 "allow_any_host": true, 00:15:17.669 "hosts": [] 00:15:17.669 }, 00:15:17.669 { 00:15:17.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.669 "subtype": "NVMe", 00:15:17.669 "listen_addresses": [ 00:15:17.669 { 00:15:17.669 "trtype": "TCP", 00:15:17.669 "adrfam": "IPv4", 00:15:17.669 "traddr": "10.0.0.3", 00:15:17.669 "trsvcid": "4420" 00:15:17.669 } 00:15:17.669 ], 00:15:17.669 "allow_any_host": true, 00:15:17.669 "hosts": [], 00:15:17.669 "serial_number": "SPDK00000000000001", 00:15:17.669 "model_number": "SPDK bdev Controller", 00:15:17.669 "max_namespaces": 32, 00:15:17.669 "min_cntlid": 1, 00:15:17.669 "max_cntlid": 65519, 00:15:17.669 "namespaces": [ 00:15:17.669 { 00:15:17.669 "nsid": 1, 00:15:17.670 "bdev_name": "Malloc0", 00:15:17.670 "name": "Malloc0", 00:15:17.670 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:17.670 "eui64": "ABCDEF0123456789", 00:15:17.670 "uuid": "c0f1ae55-8661-42dc-bc0c-e8ef32a78290" 00:15:17.670 } 00:15:17.670 ] 00:15:17.670 } 00:15:17.670 ] 00:15:17.670 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.670 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:17.670 [2024-12-10 21:42:18.378662] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
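The nvmf_get_subsystems JSON above confirms the configuration that identify.sh pushed through rpc_cmd. Outside the test harness the same setup can be reproduced with scripts/rpc.py directly; the flag values below are copied verbatim from the trace (-t tcp -o -u 8192 for the transport, a 64 MB malloc bdev with 512-byte blocks, an allow-any-host subsystem with serial SPDK00000000000001, and listeners on the target-side address 10.0.0.3:4420).

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems          # prints the JSON shown above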
00:15:17.670 [2024-12-10 21:42:18.378906] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74303 ] 00:15:17.954 [2024-12-10 21:42:18.545949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:17.954 [2024-12-10 21:42:18.546023] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:17.954 [2024-12-10 21:42:18.546031] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:17.954 [2024-12-10 21:42:18.546046] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:17.954 [2024-12-10 21:42:18.546058] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:17.954 [2024-12-10 21:42:18.546427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:17.954 [2024-12-10 21:42:18.546494] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c60750 0 00:15:17.954 [2024-12-10 21:42:18.560477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:17.954 [2024-12-10 21:42:18.560525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:17.954 [2024-12-10 21:42:18.560533] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:17.954 [2024-12-10 21:42:18.560537] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:17.954 [2024-12-10 21:42:18.560585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.560593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.560598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.560615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:17.954 [2024-12-10 21:42:18.560668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.568472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.954 [2024-12-10 21:42:18.568501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.954 [2024-12-10 21:42:18.568506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.954 [2024-12-10 21:42:18.568527] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:17.954 [2024-12-10 21:42:18.568539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:17.954 [2024-12-10 21:42:18.568548] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:17.954 [2024-12-10 21:42:18.568572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
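The DEBUG trace running through the next several dozen lines is the admin-queue bring-up for the discovery controller: FABRIC CONNECT, property reads of VS/CAP/CC/CSTS, enabling the controller (CC.EN = 1, wait for CSTS.RDY = 1), Identify Controller, AER configuration, keep-alive setup, and three GET LOG PAGE commands that fetch the discovery log. Once that log (printed further below) advertises the NVM subsystem, the same binary can be pointed at it directly; a hedged example reusing the transport-ID syntax from the command above:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'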
00:15:17.954 [2024-12-10 21:42:18.568582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.568597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-10 21:42:18.568634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.568710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.954 [2024-12-10 21:42:18.568718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.954 [2024-12-10 21:42:18.568722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.954 [2024-12-10 21:42:18.568737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:17.954 [2024-12-10 21:42:18.568746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:17.954 [2024-12-10 21:42:18.568755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.568771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-10 21:42:18.568793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.568836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.954 [2024-12-10 21:42:18.568843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.954 [2024-12-10 21:42:18.568847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.954 [2024-12-10 21:42:18.568858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:17.954 [2024-12-10 21:42:18.568868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:17.954 [2024-12-10 21:42:18.568876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.568892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-10 21:42:18.568911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.568959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.954 [2024-12-10 21:42:18.568966] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.954 [2024-12-10 21:42:18.568970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.954 [2024-12-10 21:42:18.568981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:17.954 [2024-12-10 21:42:18.568992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.568997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.569001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.569008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-10 21:42:18.569027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.569072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.954 [2024-12-10 21:42:18.569079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.954 [2024-12-10 21:42:18.569083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.569087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.954 [2024-12-10 21:42:18.569093] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:17.954 [2024-12-10 21:42:18.569098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:17.954 [2024-12-10 21:42:18.569107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:17.954 [2024-12-10 21:42:18.569220] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:17.954 [2024-12-10 21:42:18.569245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:17.954 [2024-12-10 21:42:18.569256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.569261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.954 [2024-12-10 21:42:18.569265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.954 [2024-12-10 21:42:18.569274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-10 21:42:18.569299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.954 [2024-12-10 21:42:18.569348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.569356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.569360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:17.955 [2024-12-10 21:42:18.569365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 21:42:18.569371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:17.955 [2024-12-10 21:42:18.569382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.955 [2024-12-10 21:42:18.569418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.955 [2024-12-10 21:42:18.569479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.569488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.569492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 21:42:18.569502] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:17.955 [2024-12-10 21:42:18.569508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.569517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:17.955 [2024-12-10 21:42:18.569528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.569542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.955 [2024-12-10 21:42:18.569577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.955 [2024-12-10 21:42:18.569668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:17.955 [2024-12-10 21:42:18.569676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:17.955 [2024-12-10 21:42:18.569680] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569685] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c60750): datao=0, datal=4096, cccid=0 00:15:17.955 [2024-12-10 21:42:18.569690] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc4740) on tqpair(0x1c60750): expected_datao=0, payload_size=4096 00:15:17.955 [2024-12-10 21:42:18.569695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569706] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.569727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.569731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 21:42:18.569746] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:17.955 [2024-12-10 21:42:18.569752] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:17.955 [2024-12-10 21:42:18.569757] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:17.955 [2024-12-10 21:42:18.569763] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:17.955 [2024-12-10 21:42:18.569769] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:17.955 [2024-12-10 21:42:18.569774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.569784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.569792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:17.955 [2024-12-10 21:42:18.569830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.955 [2024-12-10 21:42:18.569884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.569891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.569895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 21:42:18.569908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.955 
[2024-12-10 21:42:18.569931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.955 [2024-12-10 21:42:18.569952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.955 [2024-12-10 21:42:18.569973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.569981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.569988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.955 [2024-12-10 21:42:18.569993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.570007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:17.955 [2024-12-10 21:42:18.570016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.570028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.955 [2024-12-10 21:42:18.570049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4740, cid 0, qid 0 00:15:17.955 [2024-12-10 21:42:18.570056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc48c0, cid 1, qid 0 00:15:17.955 [2024-12-10 21:42:18.570062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4a40, cid 2, qid 0 00:15:17.955 [2024-12-10 21:42:18.570068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.955 [2024-12-10 21:42:18.570073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4d40, cid 4, qid 0 00:15:17.955 [2024-12-10 21:42:18.570158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.570165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.570169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4d40) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 
21:42:18.570179] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:17.955 [2024-12-10 21:42:18.570185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:17.955 [2024-12-10 21:42:18.570197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c60750) 00:15:17.955 [2024-12-10 21:42:18.570210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.955 [2024-12-10 21:42:18.570228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4d40, cid 4, qid 0 00:15:17.955 [2024-12-10 21:42:18.570285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:17.955 [2024-12-10 21:42:18.570299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:17.955 [2024-12-10 21:42:18.570303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570308] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c60750): datao=0, datal=4096, cccid=4 00:15:17.955 [2024-12-10 21:42:18.570313] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc4d40) on tqpair(0x1c60750): expected_datao=0, payload_size=4096 00:15:17.955 [2024-12-10 21:42:18.570318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570326] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570330] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.955 [2024-12-10 21:42:18.570346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.955 [2024-12-10 21:42:18.570350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4d40) on tqpair=0x1c60750 00:15:17.955 [2024-12-10 21:42:18.570369] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:17.955 [2024-12-10 21:42:18.570401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.955 [2024-12-10 21:42:18.570407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c60750) 00:15:17.956 [2024-12-10 21:42:18.570415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.956 [2024-12-10 21:42:18.570424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c60750) 00:15:17.956 [2024-12-10 21:42:18.570439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.956 [2024-12-10 21:42:18.570488] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4d40, cid 4, qid 0 00:15:17.956 [2024-12-10 21:42:18.570497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4ec0, cid 5, qid 0 00:15:17.956 [2024-12-10 21:42:18.570609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:17.956 [2024-12-10 21:42:18.570616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:17.956 [2024-12-10 21:42:18.570621] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c60750): datao=0, datal=1024, cccid=4 00:15:17.956 [2024-12-10 21:42:18.570630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc4d40) on tqpair(0x1c60750): expected_datao=0, payload_size=1024 00:15:17.956 [2024-12-10 21:42:18.570635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570646] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.956 [2024-12-10 21:42:18.570659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.956 [2024-12-10 21:42:18.570663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4ec0) on tqpair=0x1c60750 00:15:17.956 [2024-12-10 21:42:18.570686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.956 [2024-12-10 21:42:18.570694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.956 [2024-12-10 21:42:18.570698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4d40) on tqpair=0x1c60750 00:15:17.956 [2024-12-10 21:42:18.570716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c60750) 00:15:17.956 [2024-12-10 21:42:18.570729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.956 [2024-12-10 21:42:18.570753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4d40, cid 4, qid 0 00:15:17.956 [2024-12-10 21:42:18.570821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:17.956 [2024-12-10 21:42:18.570829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:17.956 [2024-12-10 21:42:18.570833] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570837] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c60750): datao=0, datal=3072, cccid=4 00:15:17.956 [2024-12-10 21:42:18.570842] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc4d40) on tqpair(0x1c60750): expected_datao=0, payload_size=3072 00:15:17.956 [2024-12-10 21:42:18.570847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570854] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:15:17.956 [2024-12-10 21:42:18.570858] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.956 [2024-12-10 21:42:18.570874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.956 [2024-12-10 21:42:18.570878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4d40) on tqpair=0x1c60750 00:15:17.956 [2024-12-10 21:42:18.570893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.570898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c60750) 00:15:17.956 [2024-12-10 21:42:18.570906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.956 [2024-12-10 21:42:18.570934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4d40, cid 4, qid 0 00:15:17.956 [2024-12-10 21:42:18.570995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:17.956 [2024-12-10 21:42:18.571014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:17.956 [2024-12-10 21:42:18.571019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.571023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c60750): datao=0, datal=8, cccid=4 00:15:17.956 [2024-12-10 21:42:18.571028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cc4d40) on tqpair(0x1c60750): expected_datao=0, payload_size=8 00:15:17.956 [2024-12-10 21:42:18.571033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.571040] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.571044] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.571061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.956 [2024-12-10 21:42:18.571070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.956 [2024-12-10 21:42:18.571074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.956 [2024-12-10 21:42:18.571078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4d40) on tqpair=0x1c60750 00:15:17.956 ===================================================== 00:15:17.956 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:17.956 ===================================================== 00:15:17.956 Controller Capabilities/Features 00:15:17.956 ================================ 00:15:17.956 Vendor ID: 0000 00:15:17.956 Subsystem Vendor ID: 0000 00:15:17.956 Serial Number: .................... 00:15:17.956 Model Number: ........................................ 
00:15:17.956 Firmware Version: 25.01 00:15:17.956 Recommended Arb Burst: 0 00:15:17.956 IEEE OUI Identifier: 00 00 00 00:15:17.956 Multi-path I/O 00:15:17.956 May have multiple subsystem ports: No 00:15:17.956 May have multiple controllers: No 00:15:17.956 Associated with SR-IOV VF: No 00:15:17.956 Max Data Transfer Size: 131072 00:15:17.956 Max Number of Namespaces: 0 00:15:17.956 Max Number of I/O Queues: 1024 00:15:17.956 NVMe Specification Version (VS): 1.3 00:15:17.956 NVMe Specification Version (Identify): 1.3 00:15:17.956 Maximum Queue Entries: 128 00:15:17.956 Contiguous Queues Required: Yes 00:15:17.956 Arbitration Mechanisms Supported 00:15:17.956 Weighted Round Robin: Not Supported 00:15:17.956 Vendor Specific: Not Supported 00:15:17.956 Reset Timeout: 15000 ms 00:15:17.956 Doorbell Stride: 4 bytes 00:15:17.956 NVM Subsystem Reset: Not Supported 00:15:17.956 Command Sets Supported 00:15:17.956 NVM Command Set: Supported 00:15:17.956 Boot Partition: Not Supported 00:15:17.956 Memory Page Size Minimum: 4096 bytes 00:15:17.956 Memory Page Size Maximum: 4096 bytes 00:15:17.956 Persistent Memory Region: Not Supported 00:15:17.956 Optional Asynchronous Events Supported 00:15:17.956 Namespace Attribute Notices: Not Supported 00:15:17.956 Firmware Activation Notices: Not Supported 00:15:17.956 ANA Change Notices: Not Supported 00:15:17.956 PLE Aggregate Log Change Notices: Not Supported 00:15:17.956 LBA Status Info Alert Notices: Not Supported 00:15:17.956 EGE Aggregate Log Change Notices: Not Supported 00:15:17.956 Normal NVM Subsystem Shutdown event: Not Supported 00:15:17.956 Zone Descriptor Change Notices: Not Supported 00:15:17.956 Discovery Log Change Notices: Supported 00:15:17.956 Controller Attributes 00:15:17.956 128-bit Host Identifier: Not Supported 00:15:17.956 Non-Operational Permissive Mode: Not Supported 00:15:17.956 NVM Sets: Not Supported 00:15:17.956 Read Recovery Levels: Not Supported 00:15:17.956 Endurance Groups: Not Supported 00:15:17.956 Predictable Latency Mode: Not Supported 00:15:17.956 Traffic Based Keep ALive: Not Supported 00:15:17.956 Namespace Granularity: Not Supported 00:15:17.956 SQ Associations: Not Supported 00:15:17.956 UUID List: Not Supported 00:15:17.956 Multi-Domain Subsystem: Not Supported 00:15:17.956 Fixed Capacity Management: Not Supported 00:15:17.956 Variable Capacity Management: Not Supported 00:15:17.956 Delete Endurance Group: Not Supported 00:15:17.956 Delete NVM Set: Not Supported 00:15:17.956 Extended LBA Formats Supported: Not Supported 00:15:17.956 Flexible Data Placement Supported: Not Supported 00:15:17.956 00:15:17.956 Controller Memory Buffer Support 00:15:17.956 ================================ 00:15:17.956 Supported: No 00:15:17.956 00:15:17.956 Persistent Memory Region Support 00:15:17.956 ================================ 00:15:17.956 Supported: No 00:15:17.956 00:15:17.956 Admin Command Set Attributes 00:15:17.956 ============================ 00:15:17.956 Security Send/Receive: Not Supported 00:15:17.956 Format NVM: Not Supported 00:15:17.956 Firmware Activate/Download: Not Supported 00:15:17.956 Namespace Management: Not Supported 00:15:17.956 Device Self-Test: Not Supported 00:15:17.956 Directives: Not Supported 00:15:17.956 NVMe-MI: Not Supported 00:15:17.956 Virtualization Management: Not Supported 00:15:17.956 Doorbell Buffer Config: Not Supported 00:15:17.956 Get LBA Status Capability: Not Supported 00:15:17.956 Command & Feature Lockdown Capability: Not Supported 00:15:17.956 Abort Command Limit: 1 00:15:17.956 Async 
Event Request Limit: 4 00:15:17.956 Number of Firmware Slots: N/A 00:15:17.956 Firmware Slot 1 Read-Only: N/A 00:15:17.956 Firmware Activation Without Reset: N/A 00:15:17.956 Multiple Update Detection Support: N/A 00:15:17.956 Firmware Update Granularity: No Information Provided 00:15:17.956 Per-Namespace SMART Log: No 00:15:17.956 Asymmetric Namespace Access Log Page: Not Supported 00:15:17.957 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:17.957 Command Effects Log Page: Not Supported 00:15:17.957 Get Log Page Extended Data: Supported 00:15:17.957 Telemetry Log Pages: Not Supported 00:15:17.957 Persistent Event Log Pages: Not Supported 00:15:17.957 Supported Log Pages Log Page: May Support 00:15:17.957 Commands Supported & Effects Log Page: Not Supported 00:15:17.957 Feature Identifiers & Effects Log Page:May Support 00:15:17.957 NVMe-MI Commands & Effects Log Page: May Support 00:15:17.957 Data Area 4 for Telemetry Log: Not Supported 00:15:17.957 Error Log Page Entries Supported: 128 00:15:17.957 Keep Alive: Not Supported 00:15:17.957 00:15:17.957 NVM Command Set Attributes 00:15:17.957 ========================== 00:15:17.957 Submission Queue Entry Size 00:15:17.957 Max: 1 00:15:17.957 Min: 1 00:15:17.957 Completion Queue Entry Size 00:15:17.957 Max: 1 00:15:17.957 Min: 1 00:15:17.957 Number of Namespaces: 0 00:15:17.957 Compare Command: Not Supported 00:15:17.957 Write Uncorrectable Command: Not Supported 00:15:17.957 Dataset Management Command: Not Supported 00:15:17.957 Write Zeroes Command: Not Supported 00:15:17.957 Set Features Save Field: Not Supported 00:15:17.957 Reservations: Not Supported 00:15:17.957 Timestamp: Not Supported 00:15:17.957 Copy: Not Supported 00:15:17.957 Volatile Write Cache: Not Present 00:15:17.957 Atomic Write Unit (Normal): 1 00:15:17.957 Atomic Write Unit (PFail): 1 00:15:17.957 Atomic Compare & Write Unit: 1 00:15:17.957 Fused Compare & Write: Supported 00:15:17.957 Scatter-Gather List 00:15:17.957 SGL Command Set: Supported 00:15:17.957 SGL Keyed: Supported 00:15:17.957 SGL Bit Bucket Descriptor: Not Supported 00:15:17.957 SGL Metadata Pointer: Not Supported 00:15:17.957 Oversized SGL: Not Supported 00:15:17.957 SGL Metadata Address: Not Supported 00:15:17.957 SGL Offset: Supported 00:15:17.957 Transport SGL Data Block: Not Supported 00:15:17.957 Replay Protected Memory Block: Not Supported 00:15:17.957 00:15:17.957 Firmware Slot Information 00:15:17.957 ========================= 00:15:17.957 Active slot: 0 00:15:17.957 00:15:17.957 00:15:17.957 Error Log 00:15:17.957 ========= 00:15:17.957 00:15:17.957 Active Namespaces 00:15:17.957 ================= 00:15:17.957 Discovery Log Page 00:15:17.957 ================== 00:15:17.957 Generation Counter: 2 00:15:17.957 Number of Records: 2 00:15:17.957 Record Format: 0 00:15:17.957 00:15:17.957 Discovery Log Entry 0 00:15:17.957 ---------------------- 00:15:17.957 Transport Type: 3 (TCP) 00:15:17.957 Address Family: 1 (IPv4) 00:15:17.957 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:17.957 Entry Flags: 00:15:17.957 Duplicate Returned Information: 1 00:15:17.957 Explicit Persistent Connection Support for Discovery: 1 00:15:17.957 Transport Requirements: 00:15:17.957 Secure Channel: Not Required 00:15:17.957 Port ID: 0 (0x0000) 00:15:17.957 Controller ID: 65535 (0xffff) 00:15:17.957 Admin Max SQ Size: 128 00:15:17.957 Transport Service Identifier: 4420 00:15:17.957 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:17.957 Transport Address: 10.0.0.3 00:15:17.957 
Discovery Log Entry 1 00:15:17.957 ---------------------- 00:15:17.957 Transport Type: 3 (TCP) 00:15:17.957 Address Family: 1 (IPv4) 00:15:17.957 Subsystem Type: 2 (NVM Subsystem) 00:15:17.957 Entry Flags: 00:15:17.957 Duplicate Returned Information: 0 00:15:17.957 Explicit Persistent Connection Support for Discovery: 0 00:15:17.957 Transport Requirements: 00:15:17.957 Secure Channel: Not Required 00:15:17.957 Port ID: 0 (0x0000) 00:15:17.957 Controller ID: 65535 (0xffff) 00:15:17.957 Admin Max SQ Size: 128 00:15:17.957 Transport Service Identifier: 4420 00:15:17.957 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:17.957 Transport Address: 10.0.0.3 [2024-12-10 21:42:18.571178] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:17.957 [2024-12-10 21:42:18.571193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4740) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.957 [2024-12-10 21:42:18.571206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc48c0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.957 [2024-12-10 21:42:18.571217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4a40) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.957 [2024-12-10 21:42:18.571228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.957 [2024-12-10 21:42:18.571246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.957 [2024-12-10 21:42:18.571263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.957 [2024-12-10 21:42:18.571286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.957 [2024-12-10 21:42:18.571338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.957 [2024-12-10 21:42:18.571345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.957 [2024-12-10 21:42:18.571349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.957 [2024-12-10 
21:42:18.571379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.957 [2024-12-10 21:42:18.571401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.957 [2024-12-10 21:42:18.571483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.957 [2024-12-10 21:42:18.571492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.957 [2024-12-10 21:42:18.571496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571506] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:17.957 [2024-12-10 21:42:18.571511] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:17.957 [2024-12-10 21:42:18.571522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.957 [2024-12-10 21:42:18.571539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.957 [2024-12-10 21:42:18.571560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.957 [2024-12-10 21:42:18.571613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.957 [2024-12-10 21:42:18.571620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.957 [2024-12-10 21:42:18.571624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.957 [2024-12-10 21:42:18.571657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.957 [2024-12-10 21:42:18.571676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.957 [2024-12-10 21:42:18.571730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.957 [2024-12-10 21:42:18.571743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.957 [2024-12-10 21:42:18.571747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571773] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.957 [2024-12-10 21:42:18.571780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.957 [2024-12-10 21:42:18.571799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.957 [2024-12-10 21:42:18.571849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.957 [2024-12-10 21:42:18.571856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.957 [2024-12-10 21:42:18.571860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.957 [2024-12-10 21:42:18.571876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.957 [2024-12-10 21:42:18.571881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.571885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.571893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.571911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 [2024-12-10 21:42:18.571964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.571971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.571975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.571979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.571991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.571996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.572007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.572025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 [2024-12-10 21:42:18.572075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.572082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.572086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.572102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.572118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.572136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 [2024-12-10 21:42:18.572179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.572187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.572191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.572206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.572223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.572241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 [2024-12-10 21:42:18.572287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.572309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.572314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.572330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.572347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.572365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 [2024-12-10 21:42:18.572412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.572419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.572423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.572428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.572439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.576468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.576478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c60750) 00:15:17.958 [2024-12-10 21:42:18.576490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.958 [2024-12-10 21:42:18.576520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cc4bc0, cid 3, qid 0 00:15:17.958 
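The discovery log page dumped above (Generation Counter 2, two records: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1, both reachable over TCP/IPv4 at 10.0.0.3:4420) follows the fixed layout defined by the NVMe over Fabrics specification, and the debug records around it are the host tearing the discovery controller down again once the page has been read. Below is a minimal, spec-level sketch of that layout for reference; the field offsets come from the specification, while the struct and helper names are invented for the example and are not the SPDK definitions.

/* Sketch of the NVMe-oF discovery log page layout matching the fields
 * printed above (Transport Type, Address Family, Subsystem Type, Port ID,
 * Controller ID, Admin Max SQ Size, TRSVCID, SUBNQN, TRADDR). */
#include <stdint.h>
#include <stdio.h>

struct discovery_log_header {            /* first 1024 bytes of the log page */
	uint64_t genctr;                 /* Generation Counter: 2 above */
	uint64_t numrec;                 /* Number of Records: 2 above */
	uint16_t recfmt;                 /* Record Format: 0 above */
	uint8_t  rsvd[1006];
} __attribute__((packed));

struct discovery_log_entry {             /* one 1024-byte record per subsystem/port */
	uint8_t  trtype;                 /* 3 = TCP */
	uint8_t  adrfam;                 /* 1 = IPv4 */
	uint8_t  subtype;                /* 2 = NVM subsystem, 3 = current discovery subsystem */
	uint8_t  treq;                   /* transport requirements (secure channel) */
	uint16_t portid;
	uint16_t cntlid;                 /* 0xffff = dynamic controller model */
	uint16_t asqsz;                  /* Admin Max SQ Size: 128 above */
	uint8_t  rsvd10[22];
	char     trsvcid[32];            /* "4420" */
	uint8_t  rsvd64[192];
	char     subnqn[256];
	char     traddr[256];            /* "10.0.0.3" */
	uint8_t  tsas[256];              /* transport-specific area */
} __attribute__((packed));

static void print_entry(const struct discovery_log_entry *e)
{
	printf("trtype=%u adrfam=%u subtype=%u portid=%u cntlid=0x%04x asqsz=%u\n",
	       e->trtype, e->adrfam, e->subtype, e->portid, e->cntlid, e->asqsz);
	printf("  trsvcid=%.32s traddr=%.256s\n  subnqn=%.256s\n",
	       e->trsvcid, e->traddr, e->subnqn);
}

int main(void)
{
	struct discovery_log_entry e = {0};

	_Static_assert(sizeof(struct discovery_log_header) == 1024, "header is 1 KiB");
	_Static_assert(sizeof(struct discovery_log_entry) == 1024, "entry is 1 KiB");

	/* Hand-filled mirror of Discovery Log Entry 1 above, for illustration only. */
	e.trtype = 3; e.adrfam = 1; e.subtype = 2;
	e.portid = 0; e.cntlid = 0xffff; e.asqsz = 128;
	snprintf(e.trsvcid, sizeof(e.trsvcid), "4420");
	snprintf(e.traddr, sizeof(e.traddr), "10.0.0.3");
	snprintf(e.subnqn, sizeof(e.subnqn), "nqn.2016-06.io.spdk:cnode1");
	print_entry(&e);
	return 0;
}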
[2024-12-10 21:42:18.576572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:17.958 [2024-12-10 21:42:18.576580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:17.958 [2024-12-10 21:42:18.576584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:17.958 [2024-12-10 21:42:18.576589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cc4bc0) on tqpair=0x1c60750 00:15:17.958 [2024-12-10 21:42:18.576599] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:17.958 00:15:17.958 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:17.958 [2024-12-10 21:42:18.619416] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:17.958 [2024-12-10 21:42:18.619646] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74309 ] 00:15:18.222 [2024-12-10 21:42:18.786961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:18.222 [2024-12-10 21:42:18.787040] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:18.222 [2024-12-10 21:42:18.787048] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:18.222 [2024-12-10 21:42:18.787064] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:18.222 [2024-12-10 21:42:18.787074] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:18.222 [2024-12-10 21:42:18.787378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:18.222 [2024-12-10 21:42:18.787441] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c3d750 0 00:15:18.222 [2024-12-10 21:42:18.801461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:18.222 [2024-12-10 21:42:18.801491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:18.222 [2024-12-10 21:42:18.801498] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:18.222 [2024-12-10 21:42:18.801502] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:18.222 [2024-12-10 21:42:18.801538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.801546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.801551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.801565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:18.222 [2024-12-10 21:42:18.801598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.809476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.809502] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.222 [2024-12-10 21:42:18.809508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.222 [2024-12-10 21:42:18.809528] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:18.222 [2024-12-10 21:42:18.809537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:18.222 [2024-12-10 21:42:18.809544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:18.222 [2024-12-10 21:42:18.809565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.809585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.222 [2024-12-10 21:42:18.809615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.809669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.809676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.222 [2024-12-10 21:42:18.809680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.222 [2024-12-10 21:42:18.809695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:18.222 [2024-12-10 21:42:18.809705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:18.222 [2024-12-10 21:42:18.809713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.809730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.222 [2024-12-10 21:42:18.809751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.809801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.809808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.222 [2024-12-10 21:42:18.809812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.222 [2024-12-10 21:42:18.809823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:18.222 [2024-12-10 21:42:18.809832] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:18.222 [2024-12-10 21:42:18.809839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.809856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.222 [2024-12-10 21:42:18.809875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.809918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.809925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.222 [2024-12-10 21:42:18.809929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.222 [2024-12-10 21:42:18.809940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:18.222 [2024-12-10 21:42:18.809950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.809959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.809967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.222 [2024-12-10 21:42:18.809985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.810027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.810034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.222 [2024-12-10 21:42:18.810038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.810042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.222 [2024-12-10 21:42:18.810048] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:18.222 [2024-12-10 21:42:18.810053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:18.222 [2024-12-10 21:42:18.810062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:18.222 [2024-12-10 21:42:18.810173] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:18.222 [2024-12-10 21:42:18.810193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:18.222 [2024-12-10 21:42:18.810204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.810209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.222 [2024-12-10 21:42:18.810213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.222 [2024-12-10 21:42:18.810221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.222 [2024-12-10 21:42:18.810243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.222 [2024-12-10 21:42:18.810288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.222 [2024-12-10 21:42:18.810296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 21:42:18.810299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.810309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:18.223 [2024-12-10 21:42:18.810320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.223 [2024-12-10 21:42:18.810354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.223 [2024-12-10 21:42:18.810398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.223 [2024-12-10 21:42:18.810405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 21:42:18.810409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.810419] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:18.223 [2024-12-10 21:42:18.810424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:18.223 [2024-12-10 21:42:18.810454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.223 [2024-12-10 21:42:18.810503] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.223 [2024-12-10 21:42:18.810603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.223 [2024-12-10 21:42:18.810615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.223 [2024-12-10 21:42:18.810619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810623] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=4096, cccid=0 00:15:18.223 [2024-12-10 21:42:18.810629] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1740) on tqpair(0x1c3d750): expected_datao=0, payload_size=4096 00:15:18.223 [2024-12-10 21:42:18.810634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.223 [2024-12-10 21:42:18.810664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 21:42:18.810668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.810682] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:18.223 [2024-12-10 21:42:18.810687] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:18.223 [2024-12-10 21:42:18.810692] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:18.223 [2024-12-10 21:42:18.810697] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:18.223 [2024-12-10 21:42:18.810702] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:18.223 [2024-12-10 21:42:18.810708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.223 [2024-12-10 21:42:18.810763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.223 [2024-12-10 21:42:18.810808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.223 [2024-12-10 21:42:18.810815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 
21:42:18.810819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.810831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.223 [2024-12-10 21:42:18.810853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.223 [2024-12-10 21:42:18.810875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.223 [2024-12-10 21:42:18.810895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.223 [2024-12-10 21:42:18.810915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.810938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.810942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.810950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.223 [2024-12-10 21:42:18.810971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1740, cid 0, qid 0 00:15:18.223 [2024-12-10 21:42:18.810978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca18c0, cid 1, qid 0 00:15:18.223 [2024-12-10 21:42:18.810983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1a40, cid 2, qid 0 00:15:18.223 
[2024-12-10 21:42:18.810988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.223 [2024-12-10 21:42:18.810993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.223 [2024-12-10 21:42:18.811091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.223 [2024-12-10 21:42:18.811103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 21:42:18.811107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.811112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.811118] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:18.223 [2024-12-10 21:42:18.811124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.811138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.811146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.811154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.811158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.811162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.811170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:18.223 [2024-12-10 21:42:18.811192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.223 [2024-12-10 21:42:18.811243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.223 [2024-12-10 21:42:18.811250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.223 [2024-12-10 21:42:18.811254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.811259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.223 [2024-12-10 21:42:18.811323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.811334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:18.223 [2024-12-10 21:42:18.811343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.223 [2024-12-10 21:42:18.811347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.223 [2024-12-10 21:42:18.811355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.223 [2024-12-10 21:42:18.811373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.223 
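The state transitions traced above cover the normal admin-queue bring-up of nqn.2016-06.io.spdk:cnode1 (FABRIC CONNECT, read VS and CAP, toggle CC.EN, wait for CSTS.RDY, Identify Controller, configure AER, set the keep-alive timeout, set the number of queues, then walk the active namespaces). The keep-alive and queue-count steps are ordinary admin commands whose parameters live in CDW10/CDW11; the sketch below shows those spec-defined encodings. It is an illustration only: the helper names are invented and the queue counts are arbitrary, not values taken from this run.

#include <stdint.h>
#include <stdio.h>

/* Feature identifiers and Identify CNS values matching the trace above. */
#define NVME_FEAT_NUM_QUEUES        0x07  /* SET FEATURES NUMBER OF QUEUES cdw10:00000007 */
#define NVME_FEAT_KEEP_ALIVE_TIMER  0x0f  /* GET FEATURES KEEP ALIVE TIMER cdw10:0000000f */
#define NVME_IDENTIFY_ACTIVE_NS     0x02  /* IDENTIFY ... nsid:0 cdw10:00000002 */

/* Set Features (Number of Queues): CDW11 carries the 0-based I/O queue counts,
 * NSQR in bits 15:0 and NCQR in bits 31:16. */
static void build_set_num_queues(uint32_t *cdw10, uint32_t *cdw11,
				 uint16_t nr_io_sq, uint16_t nr_io_cq)
{
	*cdw10 = NVME_FEAT_NUM_QUEUES;
	*cdw11 = ((uint32_t)(nr_io_cq - 1) << 16) | (uint32_t)(nr_io_sq - 1);
}

/* Identify (CNS 02h): active namespace ID list, starting after the given NSID
 * (0 to report from the beginning). */
static void build_identify_active_ns(uint32_t *cdw10, uint32_t *nsid)
{
	*cdw10 = NVME_IDENTIFY_ACTIVE_NS;
	*nsid = 0;
}

int main(void)
{
	uint32_t cdw10, cdw11, nsid;

	build_set_num_queues(&cdw10, &cdw11, 127, 127);
	printf("set-features number-of-queues: cdw10=0x%08x cdw11=0x%08x\n", cdw10, cdw11);

	build_identify_active_ns(&cdw10, &nsid);
	printf("identify active ns: cdw10=0x%08x nsid=%u\n", cdw10, nsid);
	return 0;
}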
[2024-12-10 21:42:18.811433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.223 [2024-12-10 21:42:18.811460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.223 [2024-12-10 21:42:18.811466] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811470] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=4096, cccid=4 00:15:18.224 [2024-12-10 21:42:18.811475] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1d40) on tqpair(0x1c3d750): expected_datao=0, payload_size=4096 00:15:18.224 [2024-12-10 21:42:18.811480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811488] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811493] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.811508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.811512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.811538] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:18.224 [2024-12-10 21:42:18.811549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.811561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.811569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.811582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.811605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.224 [2024-12-10 21:42:18.811745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.224 [2024-12-10 21:42:18.811761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.224 [2024-12-10 21:42:18.811766] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=4096, cccid=4 00:15:18.224 [2024-12-10 21:42:18.811775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1d40) on tqpair(0x1c3d750): expected_datao=0, payload_size=4096 00:15:18.224 [2024-12-10 21:42:18.811780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811792] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.811808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.811811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.811832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.811844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.811853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.811865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.811887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.224 [2024-12-10 21:42:18.811945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.224 [2024-12-10 21:42:18.811957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.224 [2024-12-10 21:42:18.811962] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811966] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=4096, cccid=4 00:15:18.224 [2024-12-10 21:42:18.811971] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1d40) on tqpair(0x1c3d750): expected_datao=0, payload_size=4096 00:15:18.224 [2024-12-10 21:42:18.811976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811983] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811987] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.811996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:18.224 [2024-12-10 
21:42:18.812059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812071] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.224 [2024-12-10 21:42:18.812076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:18.224 [2024-12-10 21:42:18.812082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:18.224 [2024-12-10 21:42:18.812100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.812121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.224 [2024-12-10 21:42:18.812161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.224 [2024-12-10 21:42:18.812168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1ec0, cid 5, qid 0 00:15:18.224 [2024-12-10 21:42:18.812230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1ec0) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.812307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1ec0, cid 5, qid 0 
00:15:18.224 [2024-12-10 21:42:18.812350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1ec0) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.812404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1ec0, cid 5, qid 0 00:15:18.224 [2024-12-10 21:42:18.812464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1ec0) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.812524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1ec0, cid 5, qid 0 00:15:18.224 [2024-12-10 21:42:18.812571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.224 [2024-12-10 21:42:18.812578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.224 [2024-12-10 21:42:18.812582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1ec0) on tqpair=0x1c3d750 00:15:18.224 [2024-12-10 21:42:18.812606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3d750) 00:15:18.224 [2024-12-10 21:42:18.812619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.224 [2024-12-10 21:42:18.812628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.224 [2024-12-10 21:42:18.812632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3d750) 00:15:18.225 [2024-12-10 21:42:18.812639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.225 [2024-12-10 21:42:18.812647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
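Once the controller reaches the ready state, spdk_nvme_identify queries a handful of features; the CDW10 values in the GET FEATURES records above are simply the feature identifiers with the select field left at 0 (current value). The decoder below is a spec-level illustration of that encoding, not SPDK code.

#include <stdint.h>
#include <stdio.h>

/* Get Features CDW10: bits 7:0 = Feature Identifier, bits 10:8 = select
 * (0 = current value).  The constants match the cdw10 values traced above. */
static const char *feature_name(uint8_t fid)
{
	switch (fid) {
	case 0x01: return "Arbitration";
	case 0x02: return "Power Management";
	case 0x04: return "Temperature Threshold";
	case 0x07: return "Number of Queues";
	case 0x0b: return "Async Event Configuration";
	case 0x0f: return "Keep Alive Timer";
	default:   return "Other";
	}
}

int main(void)
{
	const uint32_t traced_cdw10[] = { 0x00000001, 0x00000002, 0x00000004, 0x00000007 };

	for (unsigned i = 0; i < sizeof(traced_cdw10) / sizeof(traced_cdw10[0]); i++) {
		uint8_t fid = traced_cdw10[i] & 0xff;
		uint8_t sel = (traced_cdw10[i] >> 8) & 0x7;
		printf("cdw10=0x%08x -> FID 0x%02x (%s), SEL %u\n",
		       traced_cdw10[i], fid, sel, feature_name(fid));
	}
	return 0;
}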
00:15:18.225 [2024-12-10 21:42:18.812651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c3d750) 00:15:18.225 [2024-12-10 21:42:18.812658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.225 [2024-12-10 21:42:18.812667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3d750) 00:15:18.225 [2024-12-10 21:42:18.812678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.225 [2024-12-10 21:42:18.812698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1ec0, cid 5, qid 0 00:15:18.225 [2024-12-10 21:42:18.812705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1d40, cid 4, qid 0 00:15:18.225 [2024-12-10 21:42:18.812711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca2040, cid 6, qid 0 00:15:18.225 [2024-12-10 21:42:18.812716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca21c0, cid 7, qid 0 00:15:18.225 [2024-12-10 21:42:18.812851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.225 [2024-12-10 21:42:18.812867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.225 [2024-12-10 21:42:18.812872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812876] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=8192, cccid=5 00:15:18.225 [2024-12-10 21:42:18.812881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1ec0) on tqpair(0x1c3d750): expected_datao=0, payload_size=8192 00:15:18.225 [2024-12-10 21:42:18.812886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812905] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812910] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.225 [2024-12-10 21:42:18.812923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.225 [2024-12-10 21:42:18.812926] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812930] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=512, cccid=4 00:15:18.225 [2024-12-10 21:42:18.812935] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca1d40) on tqpair(0x1c3d750): expected_datao=0, payload_size=512 00:15:18.225 [2024-12-10 21:42:18.812940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812947] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812950] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.225 [2024-12-10 21:42:18.812963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.225 [2024-12-10 21:42:18.812966] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=512, cccid=6 00:15:18.225 [2024-12-10 21:42:18.812975] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca2040) on tqpair(0x1c3d750): expected_datao=0, payload_size=512 00:15:18.225 [2024-12-10 21:42:18.812979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812986] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812990] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.812996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:18.225 [2024-12-10 21:42:18.813002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:18.225 [2024-12-10 21:42:18.813005] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813009] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3d750): datao=0, datal=4096, cccid=7 00:15:18.225 [2024-12-10 21:42:18.813015] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca21c0) on tqpair(0x1c3d750): expected_datao=0, payload_size=4096 00:15:18.225 [2024-12-10 21:42:18.813019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813026] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813030] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.225 [2024-12-10 21:42:18.813042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.225 [2024-12-10 21:42:18.813046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1ec0) on tqpair=0x1c3d750 00:15:18.225 [2024-12-10 21:42:18.813067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.225 [2024-12-10 21:42:18.813074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.225 [2024-12-10 21:42:18.813078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1d40) on tqpair=0x1c3d750 00:15:18.225 [2024-12-10 21:42:18.813094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.225 [2024-12-10 21:42:18.813101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.225 [2024-12-10 21:42:18.813105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca2040) on tqpair=0x1c3d750 00:15:18.225 [2024-12-10 21:42:18.813117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.225 [2024-12-10 21:42:18.813123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.225 [2024-12-10 21:42:18.813127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.225 [2024-12-10 21:42:18.813131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca21c0) on tqpair=0x1c3d750 00:15:18.225 
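The four GET LOG PAGE commands traced above fetch the log data that feeds the identify report printed next. In Get Log Page, CDW10 bits 7:0 carry the Log Page Identifier and bits 31:16 the lower part of the 0-based dword count, so 0x07ff0001 is the 128-entry Error Information log (2048 dwords, 8 KiB), which matches "Error Log Page Entries Supported: 128" below. The decoder is a spec-level sketch, not SPDK code.

#include <stdint.h>
#include <stdio.h>

static const char *lid_name(uint8_t lid)
{
	switch (lid) {
	case 0x01: return "Error Information";
	case 0x02: return "SMART / Health Information";
	case 0x03: return "Firmware Slot Information";
	case 0x05: return "Commands Supported and Effects";
	default:   return "Other";
	}
}

int main(void)
{
	/* The cdw10 values from the GET LOG PAGE records above. */
	const uint32_t traced[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };

	for (unsigned i = 0; i < sizeof(traced) / sizeof(traced[0]); i++) {
		uint8_t  lid   = traced[i] & 0xff;
		uint32_t numdl = traced[i] >> 16;       /* 0-based dword count (low 16 bits) */
		uint32_t bytes = (numdl + 1) * 4;
		printf("cdw10=0x%08x -> %-32s %u bytes\n", traced[i], lid_name(lid), bytes);
	}
	return 0;
}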
===================================================== 00:15:18.225 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:18.225 ===================================================== 00:15:18.225 Controller Capabilities/Features 00:15:18.225 ================================ 00:15:18.225 Vendor ID: 8086 00:15:18.225 Subsystem Vendor ID: 8086 00:15:18.225 Serial Number: SPDK00000000000001 00:15:18.225 Model Number: SPDK bdev Controller 00:15:18.225 Firmware Version: 25.01 00:15:18.225 Recommended Arb Burst: 6 00:15:18.225 IEEE OUI Identifier: e4 d2 5c 00:15:18.225 Multi-path I/O 00:15:18.225 May have multiple subsystem ports: Yes 00:15:18.225 May have multiple controllers: Yes 00:15:18.225 Associated with SR-IOV VF: No 00:15:18.225 Max Data Transfer Size: 131072 00:15:18.225 Max Number of Namespaces: 32 00:15:18.225 Max Number of I/O Queues: 127 00:15:18.225 NVMe Specification Version (VS): 1.3 00:15:18.225 NVMe Specification Version (Identify): 1.3 00:15:18.225 Maximum Queue Entries: 128 00:15:18.225 Contiguous Queues Required: Yes 00:15:18.225 Arbitration Mechanisms Supported 00:15:18.225 Weighted Round Robin: Not Supported 00:15:18.225 Vendor Specific: Not Supported 00:15:18.225 Reset Timeout: 15000 ms 00:15:18.225 Doorbell Stride: 4 bytes 00:15:18.225 NVM Subsystem Reset: Not Supported 00:15:18.225 Command Sets Supported 00:15:18.225 NVM Command Set: Supported 00:15:18.225 Boot Partition: Not Supported 00:15:18.225 Memory Page Size Minimum: 4096 bytes 00:15:18.225 Memory Page Size Maximum: 4096 bytes 00:15:18.225 Persistent Memory Region: Not Supported 00:15:18.225 Optional Asynchronous Events Supported 00:15:18.225 Namespace Attribute Notices: Supported 00:15:18.225 Firmware Activation Notices: Not Supported 00:15:18.225 ANA Change Notices: Not Supported 00:15:18.225 PLE Aggregate Log Change Notices: Not Supported 00:15:18.225 LBA Status Info Alert Notices: Not Supported 00:15:18.225 EGE Aggregate Log Change Notices: Not Supported 00:15:18.225 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.225 Zone Descriptor Change Notices: Not Supported 00:15:18.225 Discovery Log Change Notices: Not Supported 00:15:18.225 Controller Attributes 00:15:18.225 128-bit Host Identifier: Supported 00:15:18.225 Non-Operational Permissive Mode: Not Supported 00:15:18.225 NVM Sets: Not Supported 00:15:18.225 Read Recovery Levels: Not Supported 00:15:18.225 Endurance Groups: Not Supported 00:15:18.225 Predictable Latency Mode: Not Supported 00:15:18.225 Traffic Based Keep ALive: Not Supported 00:15:18.225 Namespace Granularity: Not Supported 00:15:18.225 SQ Associations: Not Supported 00:15:18.225 UUID List: Not Supported 00:15:18.225 Multi-Domain Subsystem: Not Supported 00:15:18.225 Fixed Capacity Management: Not Supported 00:15:18.225 Variable Capacity Management: Not Supported 00:15:18.225 Delete Endurance Group: Not Supported 00:15:18.225 Delete NVM Set: Not Supported 00:15:18.225 Extended LBA Formats Supported: Not Supported 00:15:18.225 Flexible Data Placement Supported: Not Supported 00:15:18.225 00:15:18.225 Controller Memory Buffer Support 00:15:18.225 ================================ 00:15:18.225 Supported: No 00:15:18.225 00:15:18.225 Persistent Memory Region Support 00:15:18.225 ================================ 00:15:18.225 Supported: No 00:15:18.225 00:15:18.225 Admin Command Set Attributes 00:15:18.225 ============================ 00:15:18.225 Security Send/Receive: Not Supported 00:15:18.225 Format NVM: Not Supported 00:15:18.225 Firmware Activate/Download: 
Not Supported 00:15:18.225 Namespace Management: Not Supported 00:15:18.225 Device Self-Test: Not Supported 00:15:18.225 Directives: Not Supported 00:15:18.225 NVMe-MI: Not Supported 00:15:18.225 Virtualization Management: Not Supported 00:15:18.225 Doorbell Buffer Config: Not Supported 00:15:18.225 Get LBA Status Capability: Not Supported 00:15:18.225 Command & Feature Lockdown Capability: Not Supported 00:15:18.225 Abort Command Limit: 4 00:15:18.225 Async Event Request Limit: 4 00:15:18.225 Number of Firmware Slots: N/A 00:15:18.225 Firmware Slot 1 Read-Only: N/A 00:15:18.225 Firmware Activation Without Reset: N/A 00:15:18.225 Multiple Update Detection Support: N/A 00:15:18.226 Firmware Update Granularity: No Information Provided 00:15:18.226 Per-Namespace SMART Log: No 00:15:18.226 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.226 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:18.226 Command Effects Log Page: Supported 00:15:18.226 Get Log Page Extended Data: Supported 00:15:18.226 Telemetry Log Pages: Not Supported 00:15:18.226 Persistent Event Log Pages: Not Supported 00:15:18.226 Supported Log Pages Log Page: May Support 00:15:18.226 Commands Supported & Effects Log Page: Not Supported 00:15:18.226 Feature Identifiers & Effects Log Page:May Support 00:15:18.226 NVMe-MI Commands & Effects Log Page: May Support 00:15:18.226 Data Area 4 for Telemetry Log: Not Supported 00:15:18.226 Error Log Page Entries Supported: 128 00:15:18.226 Keep Alive: Supported 00:15:18.226 Keep Alive Granularity: 10000 ms 00:15:18.226 00:15:18.226 NVM Command Set Attributes 00:15:18.226 ========================== 00:15:18.226 Submission Queue Entry Size 00:15:18.226 Max: 64 00:15:18.226 Min: 64 00:15:18.226 Completion Queue Entry Size 00:15:18.226 Max: 16 00:15:18.226 Min: 16 00:15:18.226 Number of Namespaces: 32 00:15:18.226 Compare Command: Supported 00:15:18.226 Write Uncorrectable Command: Not Supported 00:15:18.226 Dataset Management Command: Supported 00:15:18.226 Write Zeroes Command: Supported 00:15:18.226 Set Features Save Field: Not Supported 00:15:18.226 Reservations: Supported 00:15:18.226 Timestamp: Not Supported 00:15:18.226 Copy: Supported 00:15:18.226 Volatile Write Cache: Present 00:15:18.226 Atomic Write Unit (Normal): 1 00:15:18.226 Atomic Write Unit (PFail): 1 00:15:18.226 Atomic Compare & Write Unit: 1 00:15:18.226 Fused Compare & Write: Supported 00:15:18.226 Scatter-Gather List 00:15:18.226 SGL Command Set: Supported 00:15:18.226 SGL Keyed: Supported 00:15:18.226 SGL Bit Bucket Descriptor: Not Supported 00:15:18.226 SGL Metadata Pointer: Not Supported 00:15:18.226 Oversized SGL: Not Supported 00:15:18.226 SGL Metadata Address: Not Supported 00:15:18.226 SGL Offset: Supported 00:15:18.226 Transport SGL Data Block: Not Supported 00:15:18.226 Replay Protected Memory Block: Not Supported 00:15:18.226 00:15:18.226 Firmware Slot Information 00:15:18.226 ========================= 00:15:18.226 Active slot: 1 00:15:18.226 Slot 1 Firmware Revision: 25.01 00:15:18.226 00:15:18.226 00:15:18.226 Commands Supported and Effects 00:15:18.226 ============================== 00:15:18.226 Admin Commands 00:15:18.226 -------------- 00:15:18.226 Get Log Page (02h): Supported 00:15:18.226 Identify (06h): Supported 00:15:18.226 Abort (08h): Supported 00:15:18.226 Set Features (09h): Supported 00:15:18.226 Get Features (0Ah): Supported 00:15:18.226 Asynchronous Event Request (0Ch): Supported 00:15:18.226 Keep Alive (18h): Supported 00:15:18.226 I/O Commands 00:15:18.226 ------------ 00:15:18.226 
Flush (00h): Supported LBA-Change 00:15:18.226 Write (01h): Supported LBA-Change 00:15:18.226 Read (02h): Supported 00:15:18.226 Compare (05h): Supported 00:15:18.226 Write Zeroes (08h): Supported LBA-Change 00:15:18.226 Dataset Management (09h): Supported LBA-Change 00:15:18.226 Copy (19h): Supported LBA-Change 00:15:18.226 00:15:18.226 Error Log 00:15:18.226 ========= 00:15:18.226 00:15:18.226 Arbitration 00:15:18.226 =========== 00:15:18.226 Arbitration Burst: 1 00:15:18.226 00:15:18.226 Power Management 00:15:18.226 ================ 00:15:18.226 Number of Power States: 1 00:15:18.226 Current Power State: Power State #0 00:15:18.226 Power State #0: 00:15:18.226 Max Power: 0.00 W 00:15:18.226 Non-Operational State: Operational 00:15:18.226 Entry Latency: Not Reported 00:15:18.226 Exit Latency: Not Reported 00:15:18.226 Relative Read Throughput: 0 00:15:18.226 Relative Read Latency: 0 00:15:18.226 Relative Write Throughput: 0 00:15:18.226 Relative Write Latency: 0 00:15:18.226 Idle Power: Not Reported 00:15:18.226 Active Power: Not Reported 00:15:18.226 Non-Operational Permissive Mode: Not Supported 00:15:18.226 00:15:18.226 Health Information 00:15:18.226 ================== 00:15:18.226 Critical Warnings: 00:15:18.226 Available Spare Space: OK 00:15:18.226 Temperature: OK 00:15:18.226 Device Reliability: OK 00:15:18.226 Read Only: No 00:15:18.226 Volatile Memory Backup: OK 00:15:18.226 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:18.226 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.226 Available Spare: 0% 00:15:18.226 Available Spare Threshold: 0% 00:15:18.226 Life Percentage Used:[2024-12-10 21:42:18.813238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.813245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3d750) 00:15:18.226 [2024-12-10 21:42:18.813254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.226 [2024-12-10 21:42:18.813277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca21c0, cid 7, qid 0 00:15:18.226 [2024-12-10 21:42:18.813323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.226 [2024-12-10 21:42:18.813330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.226 [2024-12-10 21:42:18.813334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.813338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca21c0) on tqpair=0x1c3d750 00:15:18.226 [2024-12-10 21:42:18.813378] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:18.226 [2024-12-10 21:42:18.813390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1740) on tqpair=0x1c3d750 00:15:18.226 [2024-12-10 21:42:18.813397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.226 [2024-12-10 21:42:18.813403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca18c0) on tqpair=0x1c3d750 00:15:18.226 [2024-12-10 21:42:18.813408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.226 [2024-12-10 21:42:18.813414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1a40) on tqpair=0x1c3d750 
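The controller report above (vendor ID, model, firmware revision, transfer sizes, supported log pages, SGL capabilities, and so on) is the Identify Controller data structure as printed by the identify test. With SPDK's public host API the same fields can be read from an attached controller; the following is a hedged sketch, assuming the address, port and subsystem NQN advertised above (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1) and the spdk_nvme_connect()/spdk_nvme_ctrlr_get_data() interfaces, with error handling and options trimmed to an outline:

/* Hedged sketch: connect to the target advertised above and print a few
 * Identify Controller fields through SPDK's public host API.  The transport
 * string reuses the address, port and NQN from this log; adjust for other
 * setups. */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    spdk_nvme_transport_id_parse(&trid,
        "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1");

    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "failed to connect to %s\n", trid.traddr);
        return 1;
    }

    /* The structure members mirror the Identify Controller fields printed in
     * the report above (serial, model, firmware revision, namespace count). */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number:    %.20s\n", (const char *)cdata->sn);
    printf("Model Number:     %.40s\n", (const char *)cdata->mn);
    printf("Firmware Version: %.8s\n",  (const char *)cdata->fr);
    printf("Max Number of Namespaces: %u\n", cdata->nn);

    spdk_nvme_detach(ctrlr);
    return 0;
}

The member names follow the NVMe specification (sn, mn, fr, nn, mdts, ...), which is why the dump above reads almost field-for-field like the Identify Controller section of the spec.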
00:15:18.226 [2024-12-10 21:42:18.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.226 [2024-12-10 21:42:18.813424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.226 [2024-12-10 21:42:18.813429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.226 [2024-12-10 21:42:18.813439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.817465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.817488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.226 [2024-12-10 21:42:18.817499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.226 [2024-12-10 21:42:18.817531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.226 [2024-12-10 21:42:18.817583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.226 [2024-12-10 21:42:18.817591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.226 [2024-12-10 21:42:18.817595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.817600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.226 [2024-12-10 21:42:18.817609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.817614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.226 [2024-12-10 21:42:18.817618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.226 [2024-12-10 21:42:18.817626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.226 [2024-12-10 21:42:18.817648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.226 [2024-12-10 21:42:18.817716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.226 [2024-12-10 21:42:18.817723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.226 [2024-12-10 21:42:18.817727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.817737] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:18.227 [2024-12-10 21:42:18.817742] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:18.227 [2024-12-10 21:42:18.817753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.817770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 
[2024-12-10 21:42:18.817787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.817830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.817837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.817840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.817856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.817873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.817891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.817933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.817940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.817944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.817959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.817968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.817975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.817993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818496] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 
00:15:18.227 [2024-12-10 21:42:18.818834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.818912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.818919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.818923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.227 [2024-12-10 21:42:18.818938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.227 [2024-12-10 21:42:18.818946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.227 [2024-12-10 21:42:18.818954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.227 [2024-12-10 21:42:18.818971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.227 [2024-12-10 21:42:18.819025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.227 [2024-12-10 21:42:18.819038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.227 [2024-12-10 21:42:18.819042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:18.228 [2024-12-10 21:42:18.819173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819527] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.819913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.819924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.819928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.819944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.819953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.819961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.819979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.820022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.820033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.820037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.820053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.820069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.820087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.820133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.820143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.820148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.820163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.820180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.820197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.820240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.820250] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.820255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.820270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.228 [2024-12-10 21:42:18.820287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.228 [2024-12-10 21:42:18.820304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.228 [2024-12-10 21:42:18.820350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.228 [2024-12-10 21:42:18.820361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.228 [2024-12-10 21:42:18.820365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.228 [2024-12-10 21:42:18.820369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.228 [2024-12-10 21:42:18.820381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.820470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.820488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.820493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.820509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.820589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.820597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.820601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 
21:42:18.820605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.820616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.820695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.820706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.820711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.820726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.820806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.820817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.820821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.820837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.820917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.820924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.820928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.820943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.820952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.820959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.820976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.821016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.821027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.821031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.821047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.821064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.821081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.821127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.821134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.821137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.821153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.821169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.821186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.821229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.821239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.821244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.821259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821268] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.821276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.821294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.821339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.821346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.821350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.821365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.821374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.821381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.821398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.825451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.825475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.825481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.825486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.825502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.825508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.825512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3d750) 00:15:18.229 [2024-12-10 21:42:18.825521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.229 [2024-12-10 21:42:18.825547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca1bc0, cid 3, qid 0 00:15:18.229 [2024-12-10 21:42:18.825598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:18.229 [2024-12-10 21:42:18.825606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:18.229 [2024-12-10 21:42:18.825609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:18.229 [2024-12-10 21:42:18.825614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca1bc0) on tqpair=0x1c3d750 00:15:18.229 [2024-12-10 21:42:18.825623] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:15:18.229 0% 00:15:18.229 Data Units Read: 0 00:15:18.229 Data Units Written: 0 00:15:18.229 Host Read Commands: 0 00:15:18.229 Host Write Commands: 0 00:15:18.229 Controller Busy Time: 0 minutes 00:15:18.229 Power Cycles: 0 00:15:18.229 Power On Hours: 0 hours 00:15:18.229 Unsafe Shutdowns: 0 
00:15:18.229 Unrecoverable Media Errors: 0 00:15:18.229 Lifetime Error Log Entries: 0 00:15:18.229 Warning Temperature Time: 0 minutes 00:15:18.229 Critical Temperature Time: 0 minutes 00:15:18.229 00:15:18.229 Number of Queues 00:15:18.229 ================ 00:15:18.229 Number of I/O Submission Queues: 127 00:15:18.229 Number of I/O Completion Queues: 127 00:15:18.229 00:15:18.229 Active Namespaces 00:15:18.229 ================= 00:15:18.229 Namespace ID:1 00:15:18.229 Error Recovery Timeout: Unlimited 00:15:18.229 Command Set Identifier: NVM (00h) 00:15:18.230 Deallocate: Supported 00:15:18.230 Deallocated/Unwritten Error: Not Supported 00:15:18.230 Deallocated Read Value: Unknown 00:15:18.230 Deallocate in Write Zeroes: Not Supported 00:15:18.230 Deallocated Guard Field: 0xFFFF 00:15:18.230 Flush: Supported 00:15:18.230 Reservation: Supported 00:15:18.230 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.230 Size (in LBAs): 131072 (0GiB) 00:15:18.230 Capacity (in LBAs): 131072 (0GiB) 00:15:18.230 Utilization (in LBAs): 131072 (0GiB) 00:15:18.230 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:18.230 EUI64: ABCDEF0123456789 00:15:18.230 UUID: c0f1ae55-8661-42dc-bc0c-e8ef32a78290 00:15:18.230 Thin Provisioning: Not Supported 00:15:18.230 Per-NS Atomic Units: Yes 00:15:18.230 Atomic Boundary Size (Normal): 0 00:15:18.230 Atomic Boundary Size (PFail): 0 00:15:18.230 Atomic Boundary Offset: 0 00:15:18.230 Maximum Single Source Range Length: 65535 00:15:18.230 Maximum Copy Length: 65535 00:15:18.230 Maximum Source Range Count: 1 00:15:18.230 NGUID/EUI64 Never Reused: No 00:15:18.230 Namespace Write Protected: No 00:15:18.230 Number of LBA Formats: 1 00:15:18.230 Current LBA Format: LBA Format #00 00:15:18.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.230 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:18.230 rmmod nvme_tcp 00:15:18.230 rmmod nvme_fabrics 00:15:18.230 rmmod nvme_keyring 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:18.230 21:42:18 
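The long run of FABRIC PROPERTY SET/GET notices earlier, ending in "shutdown complete in 7 milliseconds", is the standard NVMe shutdown handshake carried over the fabrics admin queue: the host writes CC.SHN = 01b (normal shutdown) with a Property Set, then polls CSTS.SHST with Property Get until it reads 10b (shutdown complete), giving up after the timeout derived from RTD3E (reported as 0 above, so the 10000 ms default applies). A self-contained sketch of that loop follows; read_reg32()/write_reg32() stand in for the transport's Property Get/Set and are not SPDK functions, and the tiny register file only simulates a controller that completes shutdown immediately so the loop can be run stand-alone:

/* Sketch of the CC.SHN / CSTS.SHST shutdown handshake implied by the
 * property get/set poll above.  Register offsets and field encodings follow
 * the NVMe base specification. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NVME_REG_CC    0x14u                 /* Controller Configuration */
#define NVME_REG_CSTS  0x1cu                 /* Controller Status */

#define CC_SHN_NORMAL      (0x1u << 14)      /* CC.SHN = 01b: normal shutdown */
#define CC_SHN_MASK        (0x3u << 14)
#define CSTS_SHST_MASK     (0x3u << 2)
#define CSTS_SHST_COMPLETE (0x2u << 2)       /* CSTS.SHST = 10b */

static uint32_t regs[16];                    /* simulated property space */

static uint32_t read_reg32(uint32_t off) { return regs[off / 4]; }

static void write_reg32(uint32_t off, uint32_t val)
{
    regs[off / 4] = val;
    /* Simulated controller: acknowledge a shutdown request right away. */
    if (off == NVME_REG_CC && (val & CC_SHN_MASK) == CC_SHN_NORMAL) {
        regs[NVME_REG_CSTS / 4] |= CSTS_SHST_COMPLETE;
    }
}

static bool shutdown_controller(unsigned int timeout_ms)
{
    write_reg32(NVME_REG_CC,
                (read_reg32(NVME_REG_CC) & ~CC_SHN_MASK) | CC_SHN_NORMAL);

    for (unsigned int waited = 0; waited < timeout_ms; waited++) {
        if ((read_reg32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE) {
            printf("shutdown complete in %u ms\n", waited);
            return true;
        }
        usleep(1000);                        /* poll roughly once per millisecond */
    }
    return false;                            /* timed out */
}

int main(void) { return shutdown_controller(10000) ? 0 : 1; }

Over fabrics each poll iteration is a full Property Get capsule round trip on the admin queue pair rather than a memory-mapped register read, which is why every iteration in the log appears as a capsule_cmd send followed by a CapsuleResp completion.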
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74275 ']' 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74275 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74275 ']' 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74275 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.230 21:42:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74275 00:15:18.488 killing process with pid 74275 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74275' 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74275 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74275 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:18.488 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:18.746 21:42:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:18.746 00:15:18.746 real 0m2.159s 00:15:18.746 user 0m4.449s 00:15:18.746 sys 0m0.644s 00:15:18.746 ************************************ 00:15:18.746 END TEST nvmf_identify 00:15:18.746 ************************************ 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 ************************************ 00:15:18.746 START TEST nvmf_perf 00:15:18.746 ************************************ 00:15:18.746 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:18.746 * Looking for test storage... 
00:15:19.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.005 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:19.005 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:19.005 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:19.005 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.006 --rc genhtml_branch_coverage=1 00:15:19.006 --rc genhtml_function_coverage=1 00:15:19.006 --rc genhtml_legend=1 00:15:19.006 --rc geninfo_all_blocks=1 00:15:19.006 --rc geninfo_unexecuted_blocks=1 00:15:19.006 00:15:19.006 ' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.006 --rc genhtml_branch_coverage=1 00:15:19.006 --rc genhtml_function_coverage=1 00:15:19.006 --rc genhtml_legend=1 00:15:19.006 --rc geninfo_all_blocks=1 00:15:19.006 --rc geninfo_unexecuted_blocks=1 00:15:19.006 00:15:19.006 ' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.006 --rc genhtml_branch_coverage=1 00:15:19.006 --rc genhtml_function_coverage=1 00:15:19.006 --rc genhtml_legend=1 00:15:19.006 --rc geninfo_all_blocks=1 00:15:19.006 --rc geninfo_unexecuted_blocks=1 00:15:19.006 00:15:19.006 ' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.006 --rc genhtml_branch_coverage=1 00:15:19.006 --rc genhtml_function_coverage=1 00:15:19.006 --rc genhtml_legend=1 00:15:19.006 --rc geninfo_all_blocks=1 00:15:19.006 --rc geninfo_unexecuted_blocks=1 00:15:19.006 00:15:19.006 ' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:19.006 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:19.007 Cannot find device "nvmf_init_br" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:19.007 Cannot find device "nvmf_init_br2" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:19.007 Cannot find device "nvmf_tgt_br" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.007 Cannot find device "nvmf_tgt_br2" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:19.007 Cannot find device "nvmf_init_br" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:19.007 Cannot find device "nvmf_init_br2" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:19.007 Cannot find device "nvmf_tgt_br" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:19.007 Cannot find device "nvmf_tgt_br2" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:19.007 Cannot find device "nvmf_br" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:19.007 Cannot find device "nvmf_init_if" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:19.007 Cannot find device "nvmf_init_if2" 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.007 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:19.265 21:42:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:19.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:15:19.265 00:15:19.265 --- 10.0.0.3 ping statistics --- 00:15:19.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.265 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:19.265 21:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:19.265 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:19.265 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:19.265 00:15:19.265 --- 10.0.0.4 ping statistics --- 00:15:19.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.266 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:19.266 00:15:19.266 --- 10.0.0.1 ping statistics --- 00:15:19.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.266 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:19.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:19.266 00:15:19.266 --- 10.0.0.2 ping statistics --- 00:15:19.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.266 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74524 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74524 00:15:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74524 ']' 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.266 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:19.524 [2024-12-10 21:42:20.108932] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:19.524 [2024-12-10 21:42:20.109192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.524 [2024-12-10 21:42:20.256215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.524 [2024-12-10 21:42:20.289676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.525 [2024-12-10 21:42:20.289732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.525 [2024-12-10 21:42:20.289750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.525 [2024-12-10 21:42:20.289763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.525 [2024-12-10 21:42:20.289774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.525 [2024-12-10 21:42:20.290596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.525 [2024-12-10 21:42:20.290846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.525 [2024-12-10 21:42:20.291258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.525 [2024-12-10 21:42:20.291276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.783 [2024-12-10 21:42:20.339064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:19.783 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:20.349 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:20.349 21:42:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:20.607 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:20.607 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.865 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:20.865 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:20.865 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:20.865 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:20.865 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.148 [2024-12-10 21:42:21.816601] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.148 21:42:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:21.449 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:21.449 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.707 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:21.707 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:21.965 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:22.223 [2024-12-10 21:42:22.966108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.223 21:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:22.788 21:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:22.788 21:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:22.788 21:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:22.788 21:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:23.722 Initializing NVMe Controllers 00:15:23.722 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:23.722 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:23.722 Initialization complete. Launching workers. 00:15:23.722 ======================================================== 00:15:23.722 Latency(us) 00:15:23.722 Device Information : IOPS MiB/s Average min max 00:15:23.722 PCIE (0000:00:10.0) NSID 1 from core 0: 26657.53 104.13 1200.25 319.61 8459.37 00:15:23.722 ======================================================== 00:15:23.722 Total : 26657.53 104.13 1200.25 319.61 8459.37 00:15:23.722 00:15:23.722 21:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:25.114 Initializing NVMe Controllers 00:15:25.114 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.114 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:25.114 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:25.114 Initialization complete. Launching workers. 
00:15:25.114 ======================================================== 00:15:25.114 Latency(us) 00:15:25.114 Device Information : IOPS MiB/s Average min max 00:15:25.114 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3317.91 12.96 300.99 110.51 4313.85 00:15:25.114 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.11 7941.04 12020.11 00:15:25.114 ======================================================== 00:15:25.114 Total : 3441.91 13.44 582.96 110.51 12020.11 00:15:25.114 00:15:25.114 21:42:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:26.488 Initializing NVMe Controllers 00:15:26.488 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.488 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:26.488 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:26.488 Initialization complete. Launching workers. 00:15:26.488 ======================================================== 00:15:26.488 Latency(us) 00:15:26.488 Device Information : IOPS MiB/s Average min max 00:15:26.488 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8158.67 31.87 3924.42 536.95 8014.67 00:15:26.488 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4019.38 15.70 7998.84 5073.01 11027.10 00:15:26.488 ======================================================== 00:15:26.488 Total : 12178.05 47.57 5269.18 536.95 11027.10 00:15:26.488 00:15:26.488 21:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:26.488 21:42:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:29.019 Initializing NVMe Controllers 00:15:29.019 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.019 Controller IO queue size 128, less than required. 00:15:29.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.019 Controller IO queue size 128, less than required. 00:15:29.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.019 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.019 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:29.019 Initialization complete. Launching workers. 
00:15:29.019 ======================================================== 00:15:29.019 Latency(us) 00:15:29.019 Device Information : IOPS MiB/s Average min max 00:15:29.019 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.81 397.45 82114.00 44931.06 204493.76 00:15:29.019 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 626.92 156.73 209689.45 76692.28 364386.25 00:15:29.019 ======================================================== 00:15:29.019 Total : 2216.73 554.18 118194.20 44931.06 364386.25 00:15:29.019 00:15:29.019 21:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:29.278 Initializing NVMe Controllers 00:15:29.278 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.278 Controller IO queue size 128, less than required. 00:15:29.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.278 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:29.278 Controller IO queue size 128, less than required. 00:15:29.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.278 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:29.278 WARNING: Some requested NVMe devices were skipped 00:15:29.278 No valid NVMe controllers or AIO or URING devices found 00:15:29.278 21:42:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:31.809 Initializing NVMe Controllers 00:15:31.809 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.809 Controller IO queue size 128, less than required. 00:15:31.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.809 Controller IO queue size 128, less than required. 00:15:31.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:31.809 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.809 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:31.809 Initialization complete. Launching workers. 
00:15:31.809 00:15:31.809 ==================== 00:15:31.809 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:31.809 TCP transport: 00:15:31.809 polls: 10630 00:15:31.809 idle_polls: 6287 00:15:31.809 sock_completions: 4343 00:15:31.809 nvme_completions: 7045 00:15:31.809 submitted_requests: 10472 00:15:31.809 queued_requests: 1 00:15:31.809 00:15:31.809 ==================== 00:15:31.809 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:31.809 TCP transport: 00:15:31.809 polls: 10698 00:15:31.809 idle_polls: 6049 00:15:31.809 sock_completions: 4649 00:15:31.809 nvme_completions: 6917 00:15:31.809 submitted_requests: 10282 00:15:31.809 queued_requests: 1 00:15:31.809 ======================================================== 00:15:31.809 Latency(us) 00:15:31.809 Device Information : IOPS MiB/s Average min max 00:15:31.809 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1755.88 438.97 74292.04 34818.91 134932.83 00:15:31.809 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1723.97 430.99 75545.61 27333.72 122245.30 00:15:31.809 ======================================================== 00:15:31.809 Total : 3479.85 869.96 74913.07 27333.72 134932.83 00:15:31.809 00:15:31.809 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:31.809 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.118 rmmod nvme_tcp 00:15:32.118 rmmod nvme_fabrics 00:15:32.118 rmmod nvme_keyring 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74524 ']' 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74524 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74524 ']' 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74524 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.118 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74524 00:15:32.383 killing process with pid 74524 00:15:32.383 21:42:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.383 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.383 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74524' 00:15:32.383 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74524 00:15:32.383 21:42:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74524 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.949 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:33.208 ************************************ 00:15:33.208 END TEST nvmf_perf 00:15:33.208 ************************************ 
00:15:33.208 00:15:33.208 real 0m14.329s 00:15:33.208 user 0m51.603s 00:15:33.208 sys 0m3.996s 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.208 ************************************ 00:15:33.208 START TEST nvmf_fio_host 00:15:33.208 ************************************ 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:33.208 * Looking for test storage... 00:15:33.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:33.208 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:33.468 21:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.468 --rc genhtml_branch_coverage=1 00:15:33.468 --rc genhtml_function_coverage=1 00:15:33.468 --rc genhtml_legend=1 00:15:33.468 --rc geninfo_all_blocks=1 00:15:33.468 --rc geninfo_unexecuted_blocks=1 00:15:33.468 00:15:33.468 ' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.468 --rc genhtml_branch_coverage=1 00:15:33.468 --rc genhtml_function_coverage=1 00:15:33.468 --rc genhtml_legend=1 00:15:33.468 --rc geninfo_all_blocks=1 00:15:33.468 --rc geninfo_unexecuted_blocks=1 00:15:33.468 00:15:33.468 ' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.468 --rc genhtml_branch_coverage=1 00:15:33.468 --rc genhtml_function_coverage=1 00:15:33.468 --rc genhtml_legend=1 00:15:33.468 --rc geninfo_all_blocks=1 00:15:33.468 --rc geninfo_unexecuted_blocks=1 00:15:33.468 00:15:33.468 ' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:33.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.468 --rc genhtml_branch_coverage=1 00:15:33.468 --rc genhtml_function_coverage=1 00:15:33.468 --rc genhtml_legend=1 00:15:33.468 --rc geninfo_all_blocks=1 00:15:33.468 --rc geninfo_unexecuted_blocks=1 00:15:33.468 00:15:33.468 ' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.468 21:42:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.468 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.469 21:42:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
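[editor's note] The entries above show test/nvmf/common.sh being sourced: it fixes the TCP listener ports, generates a per-run host NQN with nvme gen-hostnqn, and builds up the NVMF_APP argument array reused for every target launch. The "[: : integer expression expected" message from common.sh line 33 is the traced '[' '' -eq 1 ']' test of an unset flag; the run continues past it. A condensed sketch of that setup, with values taken from this run (the exact defaulting line and the parameter expansion for the host id are assumptions that merely reproduce what the trace shows):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # trace shows the host id equal to the uuid suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    : "${NVMF_APP_SHM_ID:=0}"                   # shm id defaults to 0 in this run
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) # shared-memory id + full tracepoint mask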
00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.469 Cannot find device "nvmf_init_br" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.469 Cannot find device "nvmf_init_br2" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.469 Cannot find device "nvmf_tgt_br" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:33.469 Cannot find device "nvmf_tgt_br2" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:33.469 Cannot find device "nvmf_init_br" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:33.469 Cannot find device "nvmf_init_br2" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:33.469 Cannot find device "nvmf_tgt_br" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:33.469 Cannot find device "nvmf_tgt_br2" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:33.469 Cannot find device "nvmf_br" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:33.469 Cannot find device "nvmf_init_if" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:33.469 Cannot find device "nvmf_init_if2" 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:33.469 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:33.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:33.729 00:15:33.729 --- 10.0.0.3 ping statistics --- 00:15:33.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.729 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.729 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.729 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:33.729 00:15:33.729 --- 10.0.0.4 ping statistics --- 00:15:33.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.729 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:33.729 00:15:33.729 --- 10.0.0.1 ping statistics --- 00:15:33.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.729 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:33.729 00:15:33.729 --- 10.0.0.2 ping statistics --- 00:15:33.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.729 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.729 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
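[editor's note] The "Cannot find device" / "Cannot open network namespace" messages above are the expected first pass of nvmf_veth_init: it tears down any leftover interfaces before rebuilding the topology, then verifies it with the four pings. What it builds, condensed to the first initiator/target pair (the *_if2/*_br2 pair is created the same way with 10.0.0.2 and 10.0.0.4; iptables comments abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'    # comment tags the rule for later cleanup
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host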
00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74985 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74985 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 74985 ']' 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.730 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.730 [2024-12-10 21:42:34.489969] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:33.730 [2024-12-10 21:42:34.490235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.988 [2024-12-10 21:42:34.635513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.988 [2024-12-10 21:42:34.670039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.988 [2024-12-10 21:42:34.670290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.988 [2024-12-10 21:42:34.670542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.988 [2024-12-10 21:42:34.670774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.988 [2024-12-10 21:42:34.670882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
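[editor's note] Here the target application is launched inside the namespace and the harness blocks in waitforlisten until the RPC socket answers; the rpc.py calls traced just below then provision a malloc-backed subsystem and a TCP listener before fio runs. A condensed replay (the backgrounding with & and the $rpc shorthand are simplifications of the harness wrappers; commands and arguments are as logged):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"       # harness helper: waits for /var/tmp/spdk.sock to accept RPCs

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420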
00:15:33.988 [2024-12-10 21:42:34.671909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.988 [2024-12-10 21:42:34.671982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.988 [2024-12-10 21:42:34.672055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.988 [2024-12-10 21:42:34.672059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.988 [2024-12-10 21:42:34.706419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.245 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.245 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:34.245 21:42:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.503 [2024-12-10 21:42:35.155712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.503 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:34.503 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.503 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.503 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:35.069 Malloc1 00:15:35.069 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.327 21:42:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.586 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:35.867 [2024-12-10 21:42:36.433762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.867 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:36.125 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:36.126 21:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:36.384 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:36.384 fio-3.35 00:15:36.384 Starting 1 thread 00:15:38.913 00:15:38.913 test: (groupid=0, jobs=1): err= 0: pid=75061: Tue Dec 10 21:42:39 2024 00:15:38.913 read: IOPS=8340, BW=32.6MiB/s (34.2MB/s)(65.4MiB/2007msec) 00:15:38.913 slat (usec): min=2, max=250, avg= 2.67, stdev= 2.64 00:15:38.913 clat (usec): min=2089, max=12925, avg=7989.49, stdev=616.70 00:15:38.913 lat (usec): min=2124, max=12928, avg=7992.17, stdev=616.41 00:15:38.913 clat percentiles (usec): 00:15:38.913 | 1.00th=[ 6783], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7504], 00:15:38.913 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:15:38.913 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:15:38.913 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[11731], 99.95th=[12518], 00:15:38.913 | 99.99th=[12911] 00:15:38.913 bw ( KiB/s): min=32824, max=33976, per=99.95%, avg=33344.00, stdev=481.15, samples=4 00:15:38.914 iops : min= 8206, max= 8494, avg=8336.00, stdev=120.29, samples=4 00:15:38.914 write: IOPS=8342, BW=32.6MiB/s (34.2MB/s)(65.4MiB/2007msec); 0 zone resets 00:15:38.914 slat (usec): min=2, max=206, avg= 2.82, stdev= 1.91 00:15:38.914 clat (usec): min=1977, max=12790, avg=7286.82, stdev=575.74 00:15:38.914 lat (usec): min=1989, max=12792, avg=7289.64, stdev=575.58 00:15:38.914 clat percentiles 
(usec): 00:15:38.914 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6849], 00:15:38.914 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:15:38.914 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8160], 00:15:38.914 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[11863], 99.95th=[12387], 00:15:38.914 | 99.99th=[12780] 00:15:38.914 bw ( KiB/s): min=32728, max=34192, per=99.99%, avg=33366.00, stdev=705.74, samples=4 00:15:38.914 iops : min= 8182, max= 8548, avg=8341.50, stdev=176.43, samples=4 00:15:38.914 lat (msec) : 2=0.01%, 4=0.16%, 10=99.49%, 20=0.34% 00:15:38.914 cpu : usr=70.44%, sys=22.38%, ctx=28, majf=0, minf=6 00:15:38.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:38.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.914 issued rwts: total=16739,16743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.914 00:15:38.914 Run status group 0 (all jobs): 00:15:38.914 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=65.4MiB (68.6MB), run=2007-2007msec 00:15:38.914 WRITE: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=65.4MiB (68.6MB), run=2007-2007msec 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:38.914 21:42:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:38.914 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:38.914 fio-3.35 00:15:38.914 Starting 1 thread 00:15:41.440 00:15:41.440 test: (groupid=0, jobs=1): err= 0: pid=75108: Tue Dec 10 21:42:41 2024 00:15:41.440 read: IOPS=7397, BW=116MiB/s (121MB/s)(232MiB/2007msec) 00:15:41.440 slat (usec): min=3, max=122, avg= 4.31, stdev= 2.53 00:15:41.440 clat (usec): min=2237, max=25719, avg=9502.12, stdev=3335.64 00:15:41.440 lat (usec): min=2240, max=25732, avg=9506.43, stdev=3336.23 00:15:41.440 clat percentiles (usec): 00:15:41.440 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6783], 00:15:41.440 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[ 9634], 00:15:41.440 | 70.00th=[10552], 80.00th=[11731], 90.00th=[14091], 95.00th=[16057], 00:15:41.440 | 99.00th=[20317], 99.50th=[20841], 99.90th=[24249], 99.95th=[24249], 00:15:41.440 | 99.99th=[25560] 00:15:41.440 bw ( KiB/s): min=54688, max=64896, per=51.96%, avg=61504.00, stdev=4664.18, samples=4 00:15:41.440 iops : min= 3418, max= 4056, avg=3844.00, stdev=291.51, samples=4 00:15:41.440 write: IOPS=4278, BW=66.9MiB/s (70.1MB/s)(126MiB/1884msec); 0 zone resets 00:15:41.440 slat (usec): min=37, max=488, avg=41.26, stdev= 7.83 00:15:41.440 clat (usec): min=3564, max=31472, avg=13427.69, stdev=3194.24 00:15:41.440 lat (usec): min=3605, max=31517, avg=13468.95, stdev=3196.31 00:15:41.440 clat percentiles (usec): 00:15:41.440 | 1.00th=[ 8291], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10945], 00:15:41.440 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13042], 60.00th=[13698], 00:15:41.440 | 70.00th=[14484], 80.00th=[15401], 90.00th=[16450], 95.00th=[17957], 00:15:41.440 | 99.00th=[27132], 99.50th=[28443], 99.90th=[31065], 99.95th=[31327], 00:15:41.440 | 99.99th=[31589] 00:15:41.440 bw ( KiB/s): min=55648, max=67616, per=93.98%, avg=64336.00, stdev=5815.35, samples=4 00:15:41.440 iops : min= 3478, max= 4226, avg=4021.00, stdev=363.46, samples=4 00:15:41.440 lat (msec) : 4=0.37%, 10=43.84%, 20=53.96%, 50=1.83% 00:15:41.440 cpu : usr=82.30%, sys=13.26%, ctx=9, majf=0, minf=9 00:15:41.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:41.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.440 issued rwts: total=14847,8061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.440 00:15:41.440 Run status group 0 (all jobs): 00:15:41.440 READ: bw=116MiB/s (121MB/s), 
116MiB/s-116MiB/s (121MB/s-121MB/s), io=232MiB (243MB), run=2007-2007msec 00:15:41.440 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=126MiB (132MB), run=1884-1884msec 00:15:41.440 21:42:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.440 rmmod nvme_tcp 00:15:41.440 rmmod nvme_fabrics 00:15:41.440 rmmod nvme_keyring 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74985 ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74985 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74985 ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74985 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74985 00:15:41.440 killing process with pid 74985 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74985' 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74985 00:15:41.440 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74985 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:41.699 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:41.957 ************************************ 00:15:41.957 END TEST nvmf_fio_host 00:15:41.957 ************************************ 00:15:41.957 00:15:41.957 real 0m8.729s 00:15:41.957 user 0m35.249s 00:15:41.957 sys 0m2.315s 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.957 ************************************ 00:15:41.957 START TEST nvmf_failover 00:15:41.957 
************************************ 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:41.957 * Looking for test storage... 00:15:41.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:15:41.957 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:42.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.220 --rc genhtml_branch_coverage=1 00:15:42.220 --rc genhtml_function_coverage=1 00:15:42.220 --rc genhtml_legend=1 00:15:42.220 --rc geninfo_all_blocks=1 00:15:42.220 --rc geninfo_unexecuted_blocks=1 00:15:42.220 00:15:42.220 ' 00:15:42.220 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:42.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.220 --rc genhtml_branch_coverage=1 00:15:42.220 --rc genhtml_function_coverage=1 00:15:42.221 --rc genhtml_legend=1 00:15:42.221 --rc geninfo_all_blocks=1 00:15:42.221 --rc geninfo_unexecuted_blocks=1 00:15:42.221 00:15:42.221 ' 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.221 --rc genhtml_branch_coverage=1 00:15:42.221 --rc genhtml_function_coverage=1 00:15:42.221 --rc genhtml_legend=1 00:15:42.221 --rc geninfo_all_blocks=1 00:15:42.221 --rc geninfo_unexecuted_blocks=1 00:15:42.221 00:15:42.221 ' 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.221 --rc genhtml_branch_coverage=1 00:15:42.221 --rc genhtml_function_coverage=1 00:15:42.221 --rc genhtml_legend=1 00:15:42.221 --rc geninfo_all_blocks=1 00:15:42.221 --rc geninfo_unexecuted_blocks=1 00:15:42.221 00:15:42.221 ' 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.221 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.222 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.223 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.223 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.223 
21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.223 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
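[editor's note] The failover test sources the same common.sh and calls nvmftestinit again; this only works because the previous test's nvmftestfini removed everything it created, keying the iptables cleanup off the SPDK_NVMF comment attached when each rule was inserted. Condensed from the teardown traced at the end of nvmf_fio_host above (the final namespace removal is inferred from the "No such file or directory" messages the failover init hits just below):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the harness-tagged rules
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # _remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself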
00:15:42.227 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.228 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:42.229 Cannot find device "nvmf_init_br" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:42.229 Cannot find device "nvmf_init_br2" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:42.229 Cannot find device "nvmf_tgt_br" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.229 Cannot find device "nvmf_tgt_br2" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:42.229 Cannot find device "nvmf_init_br" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:42.229 Cannot find device "nvmf_init_br2" 00:15:42.229 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:42.230 Cannot find device "nvmf_tgt_br" 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.230 Cannot find device "nvmf_tgt_br2" 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.230 Cannot find device "nvmf_br" 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.230 Cannot find device "nvmf_init_if" 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.230 Cannot find device "nvmf_init_if2" 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.230 21:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.489 
21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:42.489 00:15:42.489 --- 10.0.0.3 ping statistics --- 00:15:42.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.489 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.489 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.489 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:15:42.489 00:15:42.489 --- 10.0.0.4 ping statistics --- 00:15:42.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.489 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:42.489 00:15:42.489 --- 10.0.0.1 ping statistics --- 00:15:42.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.489 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:42.489 00:15:42.489 --- 10.0.0.2 ping statistics --- 00:15:42.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.489 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75369 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75369 00:15:42.489 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75369 ']' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.489 21:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:42.747 [2024-12-10 21:42:43.294041] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:15:42.747 [2024-12-10 21:42:43.294134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.747 [2024-12-10 21:42:43.437473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.747 [2024-12-10 21:42:43.484322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.747 [2024-12-10 21:42:43.484397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.747 [2024-12-10 21:42:43.484415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.747 [2024-12-10 21:42:43.484427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.747 [2024-12-10 21:42:43.484438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
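[editor's note] At this point nvmfappstart has launched nvmf_tgt inside the target namespace and is waiting for its RPC socket to come up. A minimal sketch of that launch-and-wait pattern (the real helpers are nvmfappstart and waitforlisten from the test scripts; the polling loop below is an assumption standing in for waitforlisten's retry logic, not a copy of it):

  # Sketch: start nvmf_tgt in the namespace and block until its RPC socket answers.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # retry until the app is listening on /var/tmp/spdk.sock
  done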
00:15:42.747 [2024-12-10 21:42:43.485332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.747 [2024-12-10 21:42:43.486127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.747 [2024-12-10 21:42:43.486146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.747 [2024-12-10 21:42:43.524323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.681 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:43.939 [2024-12-10 21:42:44.556902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.939 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:44.198 Malloc0 00:15:44.198 21:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.456 21:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:45.020 21:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:45.021 [2024-12-10 21:42:45.790899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:45.279 21:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:45.279 [2024-12-10 21:42:46.055273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:45.537 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:45.796 [2024-12-10 21:42:46.319433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:45.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
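[editor's note] The RPC calls above are the entire target-side configuration for this failover test. Gathered into one runnable sequence (commands copied from the log entries above; the -o and -u 8192 transport flags are passed exactly as failover.sh does):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags as in failover.sh@22
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # attach Malloc0 to the subsystem
  for port in 4420 4421 4422; do                                  # three listeners to fail over between
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done

bdevperf is then attached to the 4420 path with -x failover, and the test removes and re-adds listeners underneath it while I/O runs.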
00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75432 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75432 /var/tmp/bdevperf.sock 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75432 ']' 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.796 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:46.054 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.054 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:46.055 21:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:46.313 NVMe0n1 00:15:46.313 21:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:46.931 00:15:46.931 21:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75448 00:15:46.931 21:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:46.931 21:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:47.947 21:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:48.205 [2024-12-10 21:42:48.856722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ecf0 is same with the state(6) to be set 00:15:48.205 [2024-12-10 21:42:48.856796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ecf0 is same with the state(6) to be set 00:15:48.205 21:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:51.489 21:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:51.489 00:15:51.489 21:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:52.056 21:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:55.412 21:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:55.412 [2024-12-10 21:42:55.814018] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.412 21:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:56.346 21:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:56.605 21:42:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75448 00:16:01.891 { 00:16:01.891 "results": [ 00:16:01.891 { 00:16:01.891 "job": "NVMe0n1", 00:16:01.891 "core_mask": "0x1", 00:16:01.891 "workload": "verify", 00:16:01.891 "status": "finished", 00:16:01.891 "verify_range": { 00:16:01.891 "start": 0, 00:16:01.891 "length": 16384 00:16:01.891 }, 00:16:01.891 "queue_depth": 128, 00:16:01.891 "io_size": 4096, 00:16:01.891 "runtime": 15.009399, 00:16:01.891 "iops": 8574.8936383129, 00:16:01.891 "mibps": 33.49567827465977, 00:16:01.891 "io_failed": 3381, 00:16:01.891 "io_timeout": 0, 00:16:01.891 "avg_latency_us": 14510.313106092151, 00:16:01.891 "min_latency_us": 670.2545454545455, 00:16:01.891 "max_latency_us": 18588.392727272727 00:16:01.891 } 00:16:01.891 ], 00:16:01.891 "core_count": 1 00:16:01.891 } 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75432 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75432 ']' 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75432 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75432 00:16:02.157 killing process with pid 75432 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75432' 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75432 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75432 00:16:02.157 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:02.157 [2024-12-10 21:42:46.409824] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
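[editor's note] The JSON block above is bdevperf's summary of the 15-second verify run that continued while listeners were removed and re-added under the NVMe0 controller; io_failed reflects I/O hit by those path switches, and the throughput figure follows directly from iops and io_size. A quick consistency check, assuming the results object is saved to results.json and jq is installed (neither is part of the test itself):

  # MiB/s = iops * io_size / 2^20; here 8574.89 * 4096 / 1048576 is about 33.50, matching "mibps".
  jq '.results[0] | .iops * .io_size / 1048576' results.json

The nvme_qpair dump that follows is the saved try.txt from bdevperf: each "ABORTED - SQ DELETION" completion is an in-flight command whose queue pair went away when its listener was removed during the failover exercise.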
00:16:02.157 [2024-12-10 21:42:46.409997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75432 ] 00:16:02.157 [2024-12-10 21:42:46.563817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.157 [2024-12-10 21:42:46.598845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.157 [2024-12-10 21:42:46.628139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.157 Running I/O for 15 seconds... 00:16:02.157 6563.00 IOPS, 25.64 MiB/s [2024-12-10T21:43:02.940Z] [2024-12-10 21:42:48.857363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.857414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.857506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.857564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.157 [2024-12-10 21:42:48.857871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.857941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.857967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.157 [2024-12-10 21:42:48.858034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.157 [2024-12-10 21:42:48.858428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.157 [2024-12-10 21:42:48.858478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.858969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.858998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.158 [2024-12-10 21:42:48.859432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:110 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.859967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.859995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.158 [2024-12-10 21:42:48.860762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.158 [2024-12-10 21:42:48.860790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:02.158 [2024-12-10 21:42:48.860816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.860845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.860870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.860897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.860922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.860965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.860992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.861045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.861099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.861153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.159 [2024-12-10 21:42:48.861265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861373] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.159 [2024-12-10 21:42:48.861931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.159 [2024-12-10 21:42:48.861957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: every in-flight READ (lba 64056-64184) and WRITE (lba 64408-64464) on qid:1 is completed as ABORTED - SQ DELETION (00/08) ...]
00:16:02.160 [2024-12-10 21:42:48.863079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172db60 is same with the state(6) to be set
[... repeated nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request sequences: the queued READ (lba 64192-64208) and WRITE (lba 64472-64632) requests are aborted and completed manually as ABORTED - SQ DELETION (00/08) ...]
00:16:02.161 [2024-12-10 21:42:48.864424] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) are likewise completed as ABORTED - SQ DELETION (00/08) ...]
00:16:02.161 [2024-12-10 21:42:48.864634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:16:02.161 [2024-12-10 21:42:48.864705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bec60 (9): Bad file descriptor 00:16:02.161 [2024-12-10 21:42:48.869327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:02.161 [2024-12-10 21:42:48.908866] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:16:02.161 6879.00 IOPS, 26.87 MiB/s [2024-12-10T21:43:02.944Z] 7564.67 IOPS, 29.55 MiB/s [2024-12-10T21:43:02.944Z] 7931.50 IOPS, 30.98 MiB/s [2024-12-10T21:43:02.944Z] [2024-12-10 21:42:52.536157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.161 [2024-12-10 21:42:52.536238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.161 [2024-12-10 21:42:52.536275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.161 [2024-12-10 21:42:52.536303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.161 [2024-12-10 21:42:52.536331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bec60 is same with the state(6) to be set 00:16:02.161 [2024-12-10 21:42:52.536424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.161 [2024-12-10 21:42:52.536712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.161 [2024-12-10 21:42:52.536918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.161 [2024-12-10 21:42:52.536932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.162 [2024-12-10 21:42:52.536947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.536961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.536978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.536991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537653] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.162 [2024-12-10 21:42:52.537852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.537974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.537993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63424 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.162 [2024-12-10 21:42:52.538318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.162 [2024-12-10 21:42:52.538334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:02.162 [2024-12-10 21:42:52.538358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.538401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.538980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.538996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539090] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.163 [2024-12-10 21:42:52.539552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.163 [2024-12-10 21:42:52.539785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.163 [2024-12-10 21:42:52.539815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.163 [2024-12-10 21:42:52.539842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.539863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.539893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.539920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.539936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.539952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.539978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.539997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540215] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540608] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:52.540776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.540967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.540989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63752 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.541015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.541033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.164 [2024-12-10 21:42:52.541047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.541087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.164 [2024-12-10 21:42:52.541102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.164 [2024-12-10 21:42:52.541114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63768 len:8 PRP1 0x0 PRP2 0x0 00:16:02.164 [2024-12-10 21:42:52.541127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:52.541184] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:02.164 [2024-12-10 21:42:52.541204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:02.164 [2024-12-10 21:42:52.545225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:02.164 [2024-12-10 21:42:52.545277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bec60 (9): Bad file descriptor 00:16:02.164 [2024-12-10 21:42:52.567537] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
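The long run of "ABORTED - SQ DELETION" notices above is the expected side effect of a path failover: when bdev_nvme tears down the qpair on 10.0.0.3:4421 and moves to 10.0.0.3:4422, every command still queued on that submission queue is completed back to the bdev layer with an abort status so it can be retried once the controller resets onto the new path. Below is a minimal sketch of the multipath setup that makes this failover possible, assembled only from the RPC invocations traced later in this log (target address, ports, NQN and flags come from the trace; the loop is a condensation of the three separate attach calls, not the script's literal text):

# Sketch only. Assumes the target already exposes nqn.2016-06.io.spdk:cnode1 on
# 10.0.0.3:4420 and that bdevperf is listening on /var/tmp/bdevperf.sock, as in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Publish two extra portals the initiator can fail over to.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4422
# Register all three paths with the bdevperf app; -x failover selects the failover multipath policy.
for port in 4420 4421 4422; do
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s $port -f ipv4 -n $NQN -x failover
done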
00:16:02.164 8007.20 IOPS, 31.28 MiB/s [2024-12-10T21:43:02.947Z] 8054.00 IOPS, 31.46 MiB/s [2024-12-10T21:43:02.947Z] 8197.14 IOPS, 32.02 MiB/s [2024-12-10T21:43:02.947Z] 8310.50 IOPS, 32.46 MiB/s [2024-12-10T21:43:02.947Z] 8392.44 IOPS, 32.78 MiB/s [2024-12-10T21:43:02.947Z] [2024-12-10 21:42:57.203864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:57.203951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:57.203984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:57.204000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:57.204048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:57.204064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:57.204080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:57.204094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:57.204110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.164 [2024-12-10 21:42:57.204125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.164 [2024-12-10 21:42:57.204141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7128 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.204479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 
21:42:57.204603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.165 [2024-12-10 21:42:57.204980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.204996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.165 [2024-12-10 21:42:57.205317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.165 [2024-12-10 21:42:57.205333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.205347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.205377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.205406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.205436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.205479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.166 [2024-12-10 21:42:57.205557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.166 [2024-12-10 21:42:57.205971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.205986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.166 [2024-12-10 21:42:57.206283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.166 [2024-12-10 21:42:57.206299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.206728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 
21:42:57.206825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:02.167 [2024-12-10 21:42:57.206974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.206990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:02.167 [2024-12-10 21:42:57.207506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173f7a0 is same with the state(6) to be set 00:16:02.167 [2024-12-10 21:42:57.207541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.167 [2024-12-10 21:42:57.207553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.167 [2024-12-10 21:42:57.207563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7048 len:8 PRP1 0x0 PRP2 0x0 00:16:02.167 [2024-12-10 21:42:57.207577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.167 [2024-12-10 21:42:57.207601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.167 [2024-12-10 21:42:57.207612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7504 len:8 PRP1 0x0 PRP2 0x0 00:16:02.167 [2024-12-10 21:42:57.207626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.167 [2024-12-10 21:42:57.207640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.167 [2024-12-10 21:42:57.207650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7512 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7528 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7536 len:8 PRP1 0x0 PRP2 0x0 
00:16:02.168 [2024-12-10 21:42:57.207828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7544 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.207951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.207971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7560 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.207984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.207998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7568 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7576 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7592 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7600 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7608 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:02.168 [2024-12-10 21:42:57.208347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:02.168 [2024-12-10 21:42:57.208358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7624 len:8 PRP1 0x0 PRP2 0x0 00:16:02.168 [2024-12-10 21:42:57.208371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208437] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:02.168 [2024-12-10 21:42:57.208538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.168 [2024-12-10 21:42:57.208562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:02.168 [2024-12-10 21:42:57.208587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.168 [2024-12-10 21:42:57.208613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.168 [2024-12-10 21:42:57.208642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.168 [2024-12-10 21:42:57.208670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.168 [2024-12-10 21:42:57.208692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:02.168 [2024-12-10 21:42:57.208757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bec60 (9): Bad file descriptor 00:16:02.168 [2024-12-10 21:42:57.212798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:02.168 [2024-12-10 21:42:57.241514] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:16:02.168 8411.50 IOPS, 32.86 MiB/s [2024-12-10T21:43:02.951Z] 8451.91 IOPS, 33.02 MiB/s [2024-12-10T21:43:02.951Z] 8490.50 IOPS, 33.17 MiB/s [2024-12-10T21:43:02.951Z] 8524.46 IOPS, 33.30 MiB/s [2024-12-10T21:43:02.951Z] 8550.43 IOPS, 33.40 MiB/s [2024-12-10T21:43:02.951Z] 8575.07 IOPS, 33.50 MiB/s 00:16:02.168 Latency(us) 00:16:02.168 [2024-12-10T21:43:02.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.168 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:02.168 Verification LBA range: start 0x0 length 0x4000 00:16:02.168 NVMe0n1 : 15.01 8574.89 33.50 225.26 0.00 14510.31 670.25 18588.39 00:16:02.168 [2024-12-10T21:43:02.951Z] =================================================================================================================== 00:16:02.168 [2024-12-10T21:43:02.951Z] Total : 8574.89 33.50 225.26 0.00 14510.31 670.25 18588.39 00:16:02.168 Received shutdown signal, test time was about 15.000000 seconds 00:16:02.168 00:16:02.168 Latency(us) 00:16:02.168 [2024-12-10T21:43:02.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.168 [2024-12-10T21:43:02.951Z] =================================================================================================================== 00:16:02.168 [2024-12-10T21:43:02.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:02.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
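The first summary table above closes the 15-second verify run: 8574.89 IOPS at 33.50 MiB/s with 4 KiB I/Os. The two throughput figures are consistent, since MiB/s here is just IOPS multiplied by the 4096-byte I/O size; a quick sanity check of that relationship (plain shell arithmetic, nothing SPDK-specific):

# 8574.89 I/Os per second * 4096 bytes per I/O, expressed in MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 8574.89 * 4096 / (1024 * 1024) }'
# prints 33.50 MiB/s, matching the bdevperf summary row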
00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75622 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75622 /var/tmp/bdevperf.sock 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75622 ']' 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.168 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.169 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.169 21:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:02.765 21:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.765 21:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:02.765 21:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:02.765 [2024-12-10 21:43:03.499655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:02.765 21:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:03.035 [2024-12-10 21:43:03.815947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:03.294 21:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:03.552 NVMe0n1 00:16:03.552 21:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:03.811 00:16:03.811 21:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:04.070 00:16:04.328 21:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:04.328 21:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:04.586 21:43:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:04.844 21:43:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:08.127 21:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:08.127 21:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:08.127 21:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:08.127 21:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75697 00:16:08.127 21:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75697 00:16:09.501 { 00:16:09.502 "results": [ 00:16:09.502 { 00:16:09.502 "job": "NVMe0n1", 00:16:09.502 "core_mask": "0x1", 00:16:09.502 "workload": "verify", 00:16:09.502 "status": "finished", 00:16:09.502 "verify_range": { 00:16:09.502 "start": 0, 00:16:09.502 "length": 16384 00:16:09.502 }, 00:16:09.502 "queue_depth": 128, 00:16:09.502 "io_size": 4096, 00:16:09.502 "runtime": 1.012288, 00:16:09.502 "iops": 8290.130871846748, 00:16:09.502 "mibps": 32.38332371815136, 00:16:09.502 "io_failed": 0, 00:16:09.502 "io_timeout": 0, 00:16:09.502 "avg_latency_us": 15348.598360343181, 00:16:09.502 "min_latency_us": 1556.48, 00:16:09.502 "max_latency_us": 15371.17090909091 00:16:09.502 } 00:16:09.502 ], 00:16:09.502 "core_count": 1 00:16:09.502 } 00:16:09.502 21:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:09.502 [2024-12-10 21:43:02.918088] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:16:09.502 [2024-12-10 21:43:02.918243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75622 ] 00:16:09.502 [2024-12-10 21:43:03.073754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.502 [2024-12-10 21:43:03.106528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.502 [2024-12-10 21:43:03.135733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.502 [2024-12-10 21:43:05.417056] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:09.502 [2024-12-10 21:43:05.417195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.502 [2024-12-10 21:43:05.417222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.502 [2024-12-10 21:43:05.417241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.502 [2024-12-10 21:43:05.417256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.502 [2024-12-10 21:43:05.417271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.502 [2024-12-10 21:43:05.417284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.502 [2024-12-10 21:43:05.417299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.502 [2024-12-10 21:43:05.417313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.502 [2024-12-10 21:43:05.417327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:16:09.502 [2024-12-10 21:43:05.417379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:16:09.502 [2024-12-10 21:43:05.417412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa76c60 (9): Bad file descriptor 00:16:09.502 [2024-12-10 21:43:05.425561] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:16:09.502 Running I/O for 1 seconds... 
00:16:09.502 8256.00 IOPS, 32.25 MiB/s 00:16:09.502 Latency(us) 00:16:09.502 [2024-12-10T21:43:10.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.502 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:09.502 Verification LBA range: start 0x0 length 0x4000 00:16:09.502 NVMe0n1 : 1.01 8290.13 32.38 0.00 0.00 15348.60 1556.48 15371.17 00:16:09.502 [2024-12-10T21:43:10.285Z] =================================================================================================================== 00:16:09.502 [2024-12-10T21:43:10.285Z] Total : 8290.13 32.38 0.00 0.00 15348.60 1556.48 15371.17 00:16:09.502 21:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:09.502 21:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:09.760 21:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.017 21:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:10.017 21:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:10.275 21:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.532 21:43:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75622 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75622 ']' 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75622 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75622 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.846 killing process with pid 75622 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75622' 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75622 00:16:13.846 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75622 00:16:14.103 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:14.103 21:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:14.361 rmmod nvme_tcp 00:16:14.361 rmmod nvme_fabrics 00:16:14.361 rmmod nvme_keyring 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75369 ']' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75369 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75369 ']' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75369 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75369 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:14.361 killing process with pid 75369 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75369' 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75369 00:16:14.361 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75369 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:14.619 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:14.877 00:16:14.877 real 0m32.902s 00:16:14.877 user 2m7.446s 00:16:14.877 sys 0m5.477s 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:14.877 ************************************ 00:16:14.877 END TEST nvmf_failover 00:16:14.877 ************************************ 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.877 ************************************ 00:16:14.877 START TEST nvmf_host_discovery 00:16:14.877 ************************************ 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:14.877 * Looking for test storage... 
00:16:14.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:14.877 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.136 --rc genhtml_branch_coverage=1 00:16:15.136 --rc genhtml_function_coverage=1 00:16:15.136 --rc genhtml_legend=1 00:16:15.136 --rc geninfo_all_blocks=1 00:16:15.136 --rc geninfo_unexecuted_blocks=1 00:16:15.136 00:16:15.136 ' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.136 --rc genhtml_branch_coverage=1 00:16:15.136 --rc genhtml_function_coverage=1 00:16:15.136 --rc genhtml_legend=1 00:16:15.136 --rc geninfo_all_blocks=1 00:16:15.136 --rc geninfo_unexecuted_blocks=1 00:16:15.136 00:16:15.136 ' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.136 --rc genhtml_branch_coverage=1 00:16:15.136 --rc genhtml_function_coverage=1 00:16:15.136 --rc genhtml_legend=1 00:16:15.136 --rc geninfo_all_blocks=1 00:16:15.136 --rc geninfo_unexecuted_blocks=1 00:16:15.136 00:16:15.136 ' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.136 --rc genhtml_branch_coverage=1 00:16:15.136 --rc genhtml_function_coverage=1 00:16:15.136 --rc genhtml_legend=1 00:16:15.136 --rc geninfo_all_blocks=1 00:16:15.136 --rc geninfo_unexecuted_blocks=1 00:16:15.136 00:16:15.136 ' 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:16:15.136 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.137 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:15.138 Cannot find device "nvmf_init_br" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:15.138 Cannot find device "nvmf_init_br2" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:15.138 Cannot find device "nvmf_tgt_br" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.138 Cannot find device "nvmf_tgt_br2" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:15.138 Cannot find device "nvmf_init_br" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:15.138 Cannot find device "nvmf_init_br2" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:15.138 Cannot find device "nvmf_tgt_br" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:15.138 Cannot find device "nvmf_tgt_br2" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:15.138 Cannot find device "nvmf_br" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:15.138 Cannot find device "nvmf_init_if" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:15.138 Cannot find device "nvmf_init_if2" 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:15.138 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.396 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.396 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.396 21:43:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:15.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:15.396 00:16:15.396 --- 10.0.0.3 ping statistics --- 00:16:15.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.396 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:15.396 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:15.396 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:16:15.396 00:16:15.396 --- 10.0.0.4 ping statistics --- 00:16:15.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.396 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:15.396 00:16:15.396 --- 10.0.0.1 ping statistics --- 00:16:15.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.396 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:15.396 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:15.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:15.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:15.655 00:16:15.655 --- 10.0.0.2 ping statistics --- 00:16:15.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.655 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76024 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76024 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76024 ']' 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.655 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.655 [2024-12-10 21:43:16.274000] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:16:15.655 [2024-12-10 21:43:16.274102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.655 [2024-12-10 21:43:16.425136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.914 [2024-12-10 21:43:16.466043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.914 [2024-12-10 21:43:16.466109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.914 [2024-12-10 21:43:16.466123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.914 [2024-12-10 21:43:16.466132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.914 [2024-12-10 21:43:16.466141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.914 [2024-12-10 21:43:16.466520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.914 [2024-12-10 21:43:16.498269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 [2024-12-10 21:43:16.591897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 [2024-12-10 21:43:16.600012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.914 21:43:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 null0 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 null1 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76050 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76050 /tmp/host.sock 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76050 ']' 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.914 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.914 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.173 [2024-12-10 21:43:16.698568] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:16:16.173 [2024-12-10 21:43:16.698684] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76050 ] 00:16:16.173 [2024-12-10 21:43:16.849346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.173 [2024-12-10 21:43:16.885032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.173 [2024-12-10 21:43:16.915675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.432 21:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.432 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 [2024-12-10 21:43:17.324164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:16.691 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.950 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:16.951 21:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:17.209 [2024-12-10 21:43:17.985193] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:17.209 [2024-12-10 21:43:17.985234] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:17.209 [2024-12-10 21:43:17.985261] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:17.466 [2024-12-10 21:43:17.991291] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:17.466 [2024-12-10 21:43:18.045800] 
bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:17.466 [2024-12-10 21:43:18.046918] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa06dc0:1 started. 00:16:17.466 [2024-12-10 21:43:18.048898] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:17.466 [2024-12-10 21:43:18.048930] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:17.466 [2024-12-10 21:43:18.053669] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa06dc0 was disconnected and freed. delete nvme_qpair. 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.034 21:43:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.034 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.035 [2024-12-10 21:43:18.797601] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa150b0:1 started. 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.035 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.035 [2024-12-10 21:43:18.804174] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa150b0 was disconnected and freed. delete nvme_qpair. 
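For readers following the trace, the namespace hot-add being verified here reduces to a short RPC sequence. A minimal sketch using the same commands the harness runs above (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; the socket path, subsystem NQN and bdev names are the ones from this run):

  # target side: attach the second null bdev as a new namespace of cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

  # host side: the discovery service sees the change, re-reads the discovery log page,
  # and the attached controller exposes the new namespace as another bdev
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme0n2

  # each hot-added namespace also shows up as one new notification past the last seen id
  rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'    # expect: 1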
00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 [2024-12-10 21:43:18.909605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:18.294 [2024-12-10 21:43:18.910156] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:18.294 [2024-12-10 21:43:18.910199] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:18.294 [2024-12-10 21:43:18.916137] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:18.294 [2024-12-10 21:43:18.978751] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:18.294 [2024-12-10 21:43:18.978816] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:18.294 [2024-12-10 21:43:18.978828] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:18.294 [2024-12-10 21:43:18.978834] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.294 21:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:18.294 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.553 [2024-12-10 21:43:19.146912] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:18.553 [2024-12-10 21:43:19.146955] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:18.553 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:18.553 [2024-12-10 21:43:19.152898] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:18.553 [2024-12-10 21:43:19.152926] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:18.553 [2024-12-10 21:43:19.153047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.553 [2024-12-10 21:43:19.153083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.553 [2024-12-10 21:43:19.153097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.554 [2024-12-10 21:43:19.153107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.554 [2024-12-10 21:43:19.153117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.554 [2024-12-10 21:43:19.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.554 [2024-12-10 21:43:19.153137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:18.554 [2024-12-10 21:43:19.153146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:18.554 [2024-12-10 21:43:19.153155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e2fb0 is same with the state(6) to be set 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.554 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:18.813 21:43:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.813 21:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.199 [2024-12-10 21:43:20.556690] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:20.199 [2024-12-10 21:43:20.556731] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:20.199 [2024-12-10 21:43:20.556752] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:20.199 [2024-12-10 21:43:20.562725] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:20.199 [2024-12-10 21:43:20.621048] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:20.199 [2024-12-10 21:43:20.621876] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa041b0:1 started. 00:16:20.199 [2024-12-10 21:43:20.623964] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:20.199 [2024-12-10 21:43:20.624013] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.199 [2024-12-10 21:43:20.625758] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa041b0 was disconnected and freed. delete nvme_qpair. 
00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.199 request: 00:16:20.199 { 00:16:20.199 "name": "nvme", 00:16:20.199 "trtype": "tcp", 00:16:20.199 "traddr": "10.0.0.3", 00:16:20.199 "adrfam": "ipv4", 00:16:20.199 "trsvcid": "8009", 00:16:20.199 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:20.199 "wait_for_attach": true, 00:16:20.199 "method": "bdev_nvme_start_discovery", 00:16:20.199 "req_id": 1 00:16:20.199 } 00:16:20.199 Got JSON-RPC error response 00:16:20.199 response: 00:16:20.199 { 00:16:20.199 "code": -17, 00:16:20.199 "message": "File exists" 00:16:20.199 } 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:20.199 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.200 request: 00:16:20.200 { 00:16:20.200 "name": "nvme_second", 00:16:20.200 "trtype": "tcp", 00:16:20.200 "traddr": "10.0.0.3", 00:16:20.200 "adrfam": "ipv4", 00:16:20.200 "trsvcid": "8009", 00:16:20.200 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:20.200 "wait_for_attach": true, 00:16:20.200 "method": "bdev_nvme_start_discovery", 00:16:20.200 "req_id": 1 00:16:20.200 } 00:16:20.200 Got JSON-RPC error response 00:16:20.200 response: 00:16:20.200 { 00:16:20.200 "code": -17, 00:16:20.200 "message": "File exists" 00:16:20.200 } 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.200 21:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:21.136 [2024-12-10 21:43:21.888352] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:16:21.136 [2024-12-10 21:43:21.888430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa06740 with addr=10.0.0.3, port=8010 00:16:21.136 [2024-12-10 21:43:21.888465] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:21.136 [2024-12-10 21:43:21.888478] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:21.136 [2024-12-10 21:43:21.888488] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:22.510 [2024-12-10 21:43:22.888343] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:22.510 [2024-12-10 21:43:22.888420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa077d0 with addr=10.0.0.3, port=8010 00:16:22.510 [2024-12-10 21:43:22.888455] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:22.510 [2024-12-10 21:43:22.888468] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:22.510 [2024-12-10 21:43:22.888478] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:23.444 [2024-12-10 21:43:23.888188] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:23.444 request: 00:16:23.444 { 00:16:23.444 "name": "nvme_second", 00:16:23.444 "trtype": "tcp", 00:16:23.444 "traddr": "10.0.0.3", 00:16:23.444 "adrfam": "ipv4", 00:16:23.444 "trsvcid": "8010", 00:16:23.444 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:23.444 "wait_for_attach": false, 00:16:23.444 "attach_timeout_ms": 3000, 00:16:23.444 "method": "bdev_nvme_start_discovery", 00:16:23.444 "req_id": 1 00:16:23.444 } 00:16:23.444 Got JSON-RPC error response 00:16:23.444 response: 00:16:23.444 { 00:16:23.444 "code": -110, 00:16:23.444 "message": "Connection timed out" 00:16:23.444 } 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:23.444 21:43:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76050 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:23.444 21:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:23.444 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:23.444 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:23.444 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:23.445 rmmod nvme_tcp 00:16:23.445 rmmod nvme_fabrics 00:16:23.445 rmmod nvme_keyring 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76024 ']' 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76024 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76024 ']' 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76024 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76024 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:23.445 killing process with pid 76024 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76024' 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76024 00:16:23.445 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76024 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:23.729 00:16:23.729 real 0m8.918s 00:16:23.729 user 0m16.911s 00:16:23.729 sys 0m1.876s 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.729 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:23.729 ************************************ 00:16:23.729 END TEST nvmf_host_discovery 00:16:23.729 ************************************ 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.001 ************************************ 00:16:24.001 START TEST nvmf_host_multipath_status 00:16:24.001 ************************************ 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:24.001 * Looking for test storage... 00:16:24.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:24.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.001 --rc genhtml_branch_coverage=1 00:16:24.001 --rc genhtml_function_coverage=1 00:16:24.001 --rc genhtml_legend=1 00:16:24.001 --rc geninfo_all_blocks=1 00:16:24.001 --rc geninfo_unexecuted_blocks=1 00:16:24.001 00:16:24.001 ' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:24.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.001 --rc genhtml_branch_coverage=1 00:16:24.001 --rc genhtml_function_coverage=1 00:16:24.001 --rc genhtml_legend=1 00:16:24.001 --rc geninfo_all_blocks=1 00:16:24.001 --rc geninfo_unexecuted_blocks=1 00:16:24.001 00:16:24.001 ' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:24.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.001 --rc genhtml_branch_coverage=1 00:16:24.001 --rc genhtml_function_coverage=1 00:16:24.001 --rc genhtml_legend=1 00:16:24.001 --rc geninfo_all_blocks=1 00:16:24.001 --rc geninfo_unexecuted_blocks=1 00:16:24.001 00:16:24.001 ' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:24.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.001 --rc genhtml_branch_coverage=1 00:16:24.001 --rc genhtml_function_coverage=1 00:16:24.001 --rc genhtml_legend=1 00:16:24.001 --rc geninfo_all_blocks=1 00:16:24.001 --rc geninfo_unexecuted_blocks=1 00:16:24.001 00:16:24.001 ' 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.001 21:43:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.001 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:24.002 Cannot find device "nvmf_init_br" 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:24.002 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:24.261 Cannot find device "nvmf_init_br2" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:24.261 Cannot find device "nvmf_tgt_br" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.261 Cannot find device "nvmf_tgt_br2" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:24.261 Cannot find device "nvmf_init_br" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:24.261 Cannot find device "nvmf_init_br2" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:24.261 Cannot find device "nvmf_tgt_br" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:24.261 Cannot find device "nvmf_tgt_br2" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:24.261 Cannot find device "nvmf_br" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:24.261 Cannot find device "nvmf_init_if" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:24.261 Cannot find device "nvmf_init_if2" 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:24.261 21:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.261 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.261 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.261 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:24.261 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:24.261 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.522 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:24.522 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:24.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:24.523 00:16:24.523 --- 10.0.0.3 ping statistics --- 00:16:24.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.523 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:24.523 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:24.523 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:16:24.523 00:16:24.523 --- 10.0.0.4 ping statistics --- 00:16:24.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.523 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:24.523 00:16:24.523 --- 10.0.0.1 ping statistics --- 00:16:24.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.523 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:24.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:24.523 00:16:24.523 --- 10.0.0.2 ping statistics --- 00:16:24.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.523 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.523 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76540 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76540 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76540 ']' 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
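The interleaved commands above build the virtual topology that the pings just verified and then launch the target inside its namespace. Grouped into one place, the sequence amounts to roughly the sketch below; interface names, addresses, and the binary path are the ones in the trace, while link-up steps and the remaining iptables rules are omitted for brevity.

    # Namespace for the target plus two initiator-side and two target-side
    # veth pairs; the nvmf_tgt_if ends are moved into the namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bridge both sides together and allow NVMe/TCP traffic on port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace; the harness then waits for
    # /var/tmp/spdk.sock before issuing configuration RPCs.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3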
00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.524 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.524 [2024-12-10 21:43:25.202554] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:16:24.524 [2024-12-10 21:43:25.202647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.788 [2024-12-10 21:43:25.352584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.788 [2024-12-10 21:43:25.390255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.788 [2024-12-10 21:43:25.390543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.788 [2024-12-10 21:43:25.390741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.788 [2024-12-10 21:43:25.390895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.788 [2024-12-10 21:43:25.390938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.788 [2024-12-10 21:43:25.391952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.788 [2024-12-10 21:43:25.391965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.788 [2024-12-10 21:43:25.424175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76540 00:16:24.788 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.353 [2024-12-10 21:43:25.874714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.353 21:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:25.612 Malloc0 00:16:25.612 21:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:25.870 21:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.127 21:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:26.385 [2024-12-10 21:43:26.965425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.385 21:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:26.643 [2024-12-10 21:43:27.221598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76588 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76588 /var/tmp/bdevperf.sock 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76588 ']' 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
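Between the target start-up and the bdevperf session above, the harness configures a single subsystem that is reachable through two listeners; that is the multipath arrangement the rest of the test exercises. Collected from the trace, the target-side RPC sequence is roughly the sketch below (flags copied verbatim from the trace; $RPC is shorthand introduced here for readability).

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport, with the option flags exactly as the harness passes them.
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # One 64 MiB malloc bdev with 512-byte blocks as the backing namespace.
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # Subsystem cnode1 with any host allowed (-a) and ANA reporting enabled
    # (-r), so the listeners' ANA states can be flipped later in the test.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Two listeners on the same target address: ports 4420 and 4421 become
    # the two paths that bdevperf later attaches with "-x multipath".
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421

bdevperf then attaches the same subsystem once per listener as controller Nvme0 with -x multipath, producing the Nvme0n1 paths whose ANA and connectivity states the remainder of the trace inspects through bdev_nvme_get_io_paths and the jq select() filters shown below.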
00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.643 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:26.901 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.901 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:26.901 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:27.159 21:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:27.417 Nvme0n1 00:16:27.675 21:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:27.933 Nvme0n1 00:16:27.933 21:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:27.933 21:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.463 21:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:30.463 21:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:30.463 21:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:30.721 21:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:31.657 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:31.657 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:31.657 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.657 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:31.915 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.915 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:31.915 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.915 21:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.174 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.174 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.174 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.174 21:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:32.741 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.741 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:32.741 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.742 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.000 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.000 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:33.000 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.000 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.258 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.258 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.258 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.258 21:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.516 21:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.516 21:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:33.516 21:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:33.774 21:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:34.032 21:43:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:34.966 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:34.966 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.966 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.966 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:35.224 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.224 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:35.224 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:35.224 21:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.482 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.482 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:35.482 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.482 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.047 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.047 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.047 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.047 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.305 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.305 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.305 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.305 21:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.563 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.563 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.563 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.563 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:36.821 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.821 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:36.821 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:37.079 21:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:37.337 21:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.711 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.276 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.276 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.276 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.276 21:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.534 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.534 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:39.534 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.534 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.792 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.792 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.792 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.792 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.050 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.050 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.050 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.050 21:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.663 21:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.663 21:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:40.663 21:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:40.663 21:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:41.228 21:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:42.162 21:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:42.162 21:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:42.162 21:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.162 21:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:42.420 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.420 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:42.420 21:43:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.420 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.678 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.678 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:42.678 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.678 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.244 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.244 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.244 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.244 21:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:43.501 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.501 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:43.501 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.501 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:43.760 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.760 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:43.760 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.760 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.018 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:44.018 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:44.018 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:44.276 21:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:44.533 21:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:45.472 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:45.472 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:45.472 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.472 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.038 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.038 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:46.038 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.038 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.296 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.296 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:46.296 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.296 21:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:46.862 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.862 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:46.862 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.862 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.119 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.119 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:47.119 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.119 21:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:47.683 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:48.249 21:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:48.508 21:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:49.527 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:49.527 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:49.527 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.527 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:49.787 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:49.787 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:49.787 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.787 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:50.046 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.046 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:50.046 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.046 21:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:50.305 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.305 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.305 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.305 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.874 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.874 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:50.874 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:50.874 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.132 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:51.132 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:51.132 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:51.132 21:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:51.392 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:51.392 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:51.651 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:51.651 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:51.910 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:52.169 21:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:53.544 21:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:53.544 21:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:53.544 21:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
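Each check_status round above reduces to one jq query per port and field against bdev_nvme_get_io_paths on the initiator's RPC socket. A minimal sketch of that per-port check, reusing the exact filter from the trace (the helper body in the real script may differ):

    # port_status <trsvcid> <field: current|connected|accessible> <expected: true|false>
    port_status() {
        local port=$1 field=$2 expected=$3 value
        value=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$value" == "$expected" ]]
    }
    # e.g. after set_ANA_state optimized optimized, port 4420 is expected to be the current path:
    # port_status 4420 current true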
00:16:53.544 21:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:53.544 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:53.544 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:53.544 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.544 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:54.110 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.110 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:54.110 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.110 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.368 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.368 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.368 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.368 21:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:54.625 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.625 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:54.625 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.625 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:54.881 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.881 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:54.881 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.882 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:55.139 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.139 
21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:55.139 21:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:55.397 21:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:55.962 21:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:56.898 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:56.898 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:56.899 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.899 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:57.157 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.157 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:57.157 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.157 21:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.416 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.416 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.416 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.416 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.674 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.674 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.674 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.674 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:57.932 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.932 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:57.932 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.932 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:58.190 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.190 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:58.190 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:58.190 21:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.756 21:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.756 21:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:58.756 21:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:59.014 21:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:59.274 21:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:00.210 21:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:00.210 21:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:00.210 21:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.210 21:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:00.469 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.470 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:00.470 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.470 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:01.043 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.043 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:01.043 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.043 21:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:01.300 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.300 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:01.300 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:01.300 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.867 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.867 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:01.867 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.867 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:02.125 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.125 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:02.125 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.125 21:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:02.383 21:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.383 21:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:02.383 21:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:02.642 21:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:02.900 21:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:03.834 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:03.834 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:03.834 21:44:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.834 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:04.093 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.093 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:04.093 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.093 21:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:04.666 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:04.666 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:04.666 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.666 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:04.923 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.923 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:04.923 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.923 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:05.181 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.181 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:05.181 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.181 21:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:05.439 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.439 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:05.439 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.439 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76588 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76588 ']' 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76588 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76588 00:17:05.698 killing process with pid 76588 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76588' 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76588 00:17:05.698 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76588 00:17:05.698 { 00:17:05.698 "results": [ 00:17:05.698 { 00:17:05.698 "job": "Nvme0n1", 00:17:05.698 "core_mask": "0x4", 00:17:05.698 "workload": "verify", 00:17:05.698 "status": "terminated", 00:17:05.698 "verify_range": { 00:17:05.698 "start": 0, 00:17:05.698 "length": 16384 00:17:05.698 }, 00:17:05.698 "queue_depth": 128, 00:17:05.698 "io_size": 4096, 00:17:05.698 "runtime": 37.568519, 00:17:05.698 "iops": 8099.973278158769, 00:17:05.698 "mibps": 31.64052061780769, 00:17:05.698 "io_failed": 0, 00:17:05.698 "io_timeout": 0, 00:17:05.698 "avg_latency_us": 15770.932745723176, 00:17:05.698 "min_latency_us": 506.4145454545455, 00:17:05.698 "max_latency_us": 5033164.8 00:17:05.698 } 00:17:05.698 ], 00:17:05.698 "core_count": 1 00:17:05.698 } 00:17:05.960 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76588 00:17:05.960 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:05.960 [2024-12-10 21:43:27.287457] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:17:05.960 [2024-12-10 21:43:27.287567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76588 ] 00:17:05.960 [2024-12-10 21:43:27.433141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.960 [2024-12-10 21:43:27.473400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.960 [2024-12-10 21:43:27.507857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.960 Running I/O for 90 seconds... 
00:17:05.960 6676.00 IOPS, 26.08 MiB/s [2024-12-10T21:44:06.743Z] 6994.50 IOPS, 27.32 MiB/s [2024-12-10T21:44:06.743Z] 7657.67 IOPS, 29.91 MiB/s [2024-12-10T21:44:06.743Z] 8033.75 IOPS, 31.38 MiB/s [2024-12-10T21:44:06.743Z] 8234.60 IOPS, 32.17 MiB/s [2024-12-10T21:44:06.743Z] 8375.17 IOPS, 32.72 MiB/s [2024-12-10T21:44:06.743Z] 8457.57 IOPS, 33.04 MiB/s [2024-12-10T21:44:06.743Z] 8544.38 IOPS, 33.38 MiB/s [2024-12-10T21:44:06.743Z] 8573.00 IOPS, 33.49 MiB/s [2024-12-10T21:44:06.743Z] 8586.00 IOPS, 33.54 MiB/s [2024-12-10T21:44:06.743Z] 8607.64 IOPS, 33.62 MiB/s [2024-12-10T21:44:06.743Z] 8645.08 IOPS, 33.77 MiB/s [2024-12-10T21:44:06.743Z] 8682.54 IOPS, 33.92 MiB/s [2024-12-10T21:44:06.743Z] 8711.50 IOPS, 34.03 MiB/s [2024-12-10T21:44:06.743Z] 8732.33 IOPS, 34.11 MiB/s [2024-12-10T21:44:06.743Z] 8741.56 IOPS, 34.15 MiB/s [2024-12-10T21:44:06.743Z] [2024-12-10 21:43:44.895723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.960 [2024-12-10 21:43:44.895803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:05.960 [2024-12-10 21:43:44.895865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.960 [2024-12-10 21:43:44.895889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.895913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.895929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.895952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.895968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.895990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:05.961 [2024-12-10 21:43:44.896119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.896635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.896982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.896998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.961 [2024-12-10 21:43:44.897274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.897312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:17:05.961 [2024-12-10 21:43:44.897334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.897349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.897393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.897432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:05.961 [2024-12-10 21:43:44.897486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.961 [2024-12-10 21:43:44.897516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.897801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.897840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.897878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.897955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.897977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.897993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.898432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:05.962 [2024-12-10 21:43:44.898569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.898983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.899022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.899060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.962 [2024-12-10 21:43:44.899092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:05.962 [2024-12-10 21:43:44.899118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.962 [2024-12-10 21:43:44.899134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.899754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:17:05.963 [2024-12-10 21:43:44.899797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.899817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.899857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.899895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.899933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.899971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.899993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.900009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.900047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.963 [2024-12-10 21:43:44.900085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.900670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.900686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.901457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.963 [2024-12-10 21:43:44.901500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:05.963 [2024-12-10 21:43:44.901544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:43:44.901890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:43:44.901911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:05.964 8281.00 IOPS, 32.35 MiB/s [2024-12-10T21:44:06.747Z] 7820.94 IOPS, 30.55 MiB/s [2024-12-10T21:44:06.747Z] 7409.32 IOPS, 28.94 MiB/s [2024-12-10T21:44:06.747Z] 7038.85 IOPS, 27.50 MiB/s [2024-12-10T21:44:06.747Z] 6703.67 IOPS, 26.19 MiB/s [2024-12-10T21:44:06.747Z] 6765.18 IOPS, 26.43 MiB/s [2024-12-10T21:44:06.747Z] 6857.48 IOPS, 26.79 MiB/s [2024-12-10T21:44:06.747Z] 6965.42 IOPS, 27.21 MiB/s [2024-12-10T21:44:06.747Z] 7161.40 IOPS, 27.97 MiB/s [2024-12-10T21:44:06.747Z] 7247.12 IOPS, 28.31 MiB/s [2024-12-10T21:44:06.747Z] 7384.59 IOPS, 28.85 MiB/s [2024-12-10T21:44:06.747Z] 7483.50 IOPS, 29.23 MiB/s [2024-12-10T21:44:06.747Z] 7534.69 IOPS, 29.43 MiB/s [2024-12-10T21:44:06.747Z] 7578.70 IOPS, 29.60 MiB/s [2024-12-10T21:44:06.747Z] 7624.23 IOPS, 29.78 MiB/s [2024-12-10T21:44:06.747Z] 7751.84 IOPS, 30.28 MiB/s [2024-12-10T21:44:06.747Z] 7860.67 IOPS, 30.71 MiB/s [2024-12-10T21:44:06.747Z] 7983.35 IOPS, 31.18 MiB/s [2024-12-10T21:44:06.747Z] [2024-12-10 21:44:03.544269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.544961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.544985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:05.964 [2024-12-10 21:44:03.545001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.545329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.964 [2024-12-10 21:44:03.545366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.964 [2024-12-10 21:44:03.545587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:05.964 [2024-12-10 21:44:03.545609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.545785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.545823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.545963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.545996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:17:05.965 [2024-12-10 21:44:03.546245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.546632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.546654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.546670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.548144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.548209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.965 [2024-12-10 21:44:03.548252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.548292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.548330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.548369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.548407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.965 [2024-12-10 21:44:03.548477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.965 [2024-12-10 21:44:03.548503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.966 [2024-12-10 21:44:03.548519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:05.966 [2024-12-10 21:44:03.548541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.966 [2024-12-10 21:44:03.548557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:05.966 [2024-12-10 21:44:03.548580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.966 [2024-12-10 21:44:03.548596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:05.966 [2024-12-10 21:44:03.548618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.966 [2024-12-10 21:44:03.548634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:05.966 [2024-12-10 21:44:03.548656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:05.966 [2024-12-10 21:44:03.548672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:05.966 8072.69 IOPS, 31.53 MiB/s [2024-12-10T21:44:06.749Z] 8101.11 IOPS, 31.64 MiB/s [2024-12-10T21:44:06.749Z] 8118.05 IOPS, 31.71 MiB/s [2024-12-10T21:44:06.749Z] Received shutdown signal, test time was about 37.569378 seconds 00:17:05.966 00:17:05.966 Latency(us) 00:17:05.966 [2024-12-10T21:44:06.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.966 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.966 Verification LBA range: start 0x0 length 0x4000 00:17:05.966 Nvme0n1 : 37.57 8099.97 31.64 0.00 0.00 15770.93 506.41 5033164.80 00:17:05.966 [2024-12-10T21:44:06.749Z] =================================================================================================================== 00:17:05.966 [2024-12-10T21:44:06.749Z] Total : 8099.97 31.64 0.00 0.00 15770.93 506.41 5033164.80 00:17:05.966 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.225 rmmod nvme_tcp 00:17:06.225 rmmod nvme_fabrics 00:17:06.225 rmmod nvme_keyring 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76540 ']' 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76540 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76540 ']' 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76540 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76540 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.225 killing process with pid 76540 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76540' 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76540 00:17:06.225 21:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76540 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:06.483 21:44:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:06.483 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:06.741 ************************************ 00:17:06.741 END TEST nvmf_host_multipath_status 00:17:06.741 ************************************ 00:17:06.741 00:17:06.741 real 0m42.829s 00:17:06.741 user 2m20.546s 00:17:06.741 sys 0m12.615s 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.741 ************************************ 00:17:06.741 START TEST nvmf_discovery_remove_ifc 00:17:06.741 ************************************ 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:06.741 * Looking for test storage... 
00:17:06.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.741 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.000 --rc genhtml_branch_coverage=1 00:17:07.000 --rc genhtml_function_coverage=1 00:17:07.000 --rc genhtml_legend=1 00:17:07.000 --rc geninfo_all_blocks=1 00:17:07.000 --rc geninfo_unexecuted_blocks=1 00:17:07.000 00:17:07.000 ' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.000 --rc genhtml_branch_coverage=1 00:17:07.000 --rc genhtml_function_coverage=1 00:17:07.000 --rc genhtml_legend=1 00:17:07.000 --rc geninfo_all_blocks=1 00:17:07.000 --rc geninfo_unexecuted_blocks=1 00:17:07.000 00:17:07.000 ' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.000 --rc genhtml_branch_coverage=1 00:17:07.000 --rc genhtml_function_coverage=1 00:17:07.000 --rc genhtml_legend=1 00:17:07.000 --rc geninfo_all_blocks=1 00:17:07.000 --rc geninfo_unexecuted_blocks=1 00:17:07.000 00:17:07.000 ' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.000 --rc genhtml_branch_coverage=1 00:17:07.000 --rc genhtml_function_coverage=1 00:17:07.000 --rc genhtml_legend=1 00:17:07.000 --rc geninfo_all_blocks=1 00:17:07.000 --rc geninfo_unexecuted_blocks=1 00:17:07.000 00:17:07.000 ' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.000 21:44:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.000 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.001 21:44:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:07.001 Cannot find device "nvmf_init_br" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:07.001 Cannot find device "nvmf_init_br2" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:07.001 Cannot find device "nvmf_tgt_br" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.001 Cannot find device "nvmf_tgt_br2" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:07.001 Cannot find device "nvmf_init_br" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:07.001 Cannot find device "nvmf_init_br2" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:07.001 Cannot find device "nvmf_tgt_br" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:07.001 Cannot find device "nvmf_tgt_br2" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:07.001 Cannot find device "nvmf_br" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:07.001 Cannot find device "nvmf_init_if" 00:17:07.001 21:44:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:07.001 Cannot find device "nvmf_init_if2" 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:07.001 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.002 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.002 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.002 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.260 21:44:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:07.260 00:17:07.260 --- 10.0.0.3 ping statistics --- 00:17:07.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.260 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.260 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.260 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:17:07.260 00:17:07.260 --- 10.0.0.4 ping statistics --- 00:17:07.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.260 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:07.260 00:17:07.260 --- 10.0.0.1 ping statistics --- 00:17:07.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.260 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:07.260 00:17:07.260 --- 10.0.0.2 ping statistics --- 00:17:07.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.260 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77468 00:17:07.260 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77468 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77468 ']' 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.261 21:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.518 [2024-12-10 21:44:08.045485] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:17:07.518 [2024-12-10 21:44:08.045587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.518 [2024-12-10 21:44:08.198592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.518 [2024-12-10 21:44:08.235304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.519 [2024-12-10 21:44:08.235363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.519 [2024-12-10 21:44:08.235376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.519 [2024-12-10 21:44:08.235386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.519 [2024-12-10 21:44:08.235395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.519 [2024-12-10 21:44:08.235770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.519 [2024-12-10 21:44:08.267421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-12-10 21:44:08.365606] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.776 [2024-12-10 21:44:08.373753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:07.776 null0 00:17:07.776 [2024-12-10 21:44:08.405718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77494 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77494 /tmp/host.sock 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77494 ']' 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.776 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.776 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 [2024-12-10 21:44:08.487322] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:17:07.776 [2024-12-10 21:44:08.487401] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77494 ] 00:17:08.036 [2024-12-10 21:44:08.635943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.036 [2024-12-10 21:44:08.675195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.036 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.036 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 [2024-12-10 21:44:08.751678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.037 21:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.411 [2024-12-10 21:44:09.788736] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:09.411 [2024-12-10 21:44:09.788780] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:09.411 [2024-12-10 21:44:09.788805] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:09.411 [2024-12-10 21:44:09.794784] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:09.411 [2024-12-10 21:44:09.849160] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:09.411 [2024-12-10 21:44:09.850166] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c1dfb0:1 started. 00:17:09.411 [2024-12-10 21:44:09.851873] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:09.411 [2024-12-10 21:44:09.851933] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:09.411 [2024-12-10 21:44:09.851961] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:09.411 [2024-12-10 21:44:09.851979] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:09.411 [2024-12-10 21:44:09.852007] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.411 [2024-12-10 21:44:09.857381] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c1dfb0 was disconnected and freed. delete nvme_qpair. 
00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:09.411 21:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:10.344 21:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.344 21:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:10.344 21:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:11.277 21:44:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:11.277 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.536 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:11.536 21:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:12.472 21:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:13.406 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.665 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:13.665 21:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:14.599 21:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:14.599 [2024-12-10 21:44:15.279728] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:14.599 [2024-12-10 21:44:15.279824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.599 [2024-12-10 21:44:15.279842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.599 [2024-12-10 21:44:15.279855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.599 [2024-12-10 21:44:15.279865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.599 [2024-12-10 21:44:15.279875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.599 [2024-12-10 21:44:15.279883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.599 [2024-12-10 21:44:15.279894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.599 [2024-12-10 21:44:15.279903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.599 [2024-12-10 21:44:15.279913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.599 [2024-12-10 21:44:15.279922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.599 [2024-12-10 21:44:15.279931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e20 is same with the state(6) to be set 00:17:14.599 [2024-12-10 21:44:15.289717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e20 (9): Bad file descriptor 00:17:14.599 [2024-12-10 21:44:15.299740] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:17:14.599 [2024-12-10 21:44:15.299770] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:14.599 [2024-12-10 21:44:15.299778] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:14.599 [2024-12-10 21:44:15.299785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:14.599 [2024-12-10 21:44:15.299828] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.567 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:15.567 [2024-12-10 21:44:16.335576] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:15.567 [2024-12-10 21:44:16.335706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16e20 with addr=10.0.0.3, port=4420 00:17:15.567 [2024-12-10 21:44:16.335744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16e20 is same with the state(6) to be set 00:17:15.567 [2024-12-10 21:44:16.335814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16e20 (9): Bad file descriptor 00:17:15.567 [2024-12-10 21:44:16.336729] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:17:15.567 [2024-12-10 21:44:16.337036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:15.567 [2024-12-10 21:44:16.337254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:15.567 [2024-12-10 21:44:16.337511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:15.567 [2024-12-10 21:44:16.337737] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:15.567 [2024-12-10 21:44:16.337776] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:15.567 [2024-12-10 21:44:16.337789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:15.567 [2024-12-10 21:44:16.337947] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:17:15.567 [2024-12-10 21:44:16.337978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:15.825 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.825 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:15.825 21:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:16.777 [2024-12-10 21:44:17.338207] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:16.777 [2024-12-10 21:44:17.338267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:16.777 [2024-12-10 21:44:17.338303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:16.777 [2024-12-10 21:44:17.338314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:16.777 [2024-12-10 21:44:17.338324] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:16.777 [2024-12-10 21:44:17.338334] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:16.777 [2024-12-10 21:44:17.338340] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:16.777 [2024-12-10 21:44:17.338345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:16.777 [2024-12-10 21:44:17.338382] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:16.777 [2024-12-10 21:44:17.338433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.777 [2024-12-10 21:44:17.338462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.777 [2024-12-10 21:44:17.338478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.777 [2024-12-10 21:44:17.338487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.777 [2024-12-10 21:44:17.338497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.777 [2024-12-10 21:44:17.338506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.777 [2024-12-10 21:44:17.338516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.777 [2024-12-10 21:44:17.338525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.777 [2024-12-10 21:44:17.338535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:16.777 [2024-12-10 21:44:17.338544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:16.777 [2024-12-10 21:44:17.338553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:17:16.777 [2024-12-10 21:44:17.339107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2a20 (9): Bad file descriptor 00:17:16.777 [2024-12-10 21:44:17.340108] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:16.777 [2024-12-10 21:44:17.340136] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:16.777 21:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:17.754 21:44:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:17.754 21:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:18.689 [2024-12-10 21:44:19.343755] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:18.689 [2024-12-10 21:44:19.343793] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:18.689 [2024-12-10 21:44:19.343815] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:18.689 [2024-12-10 21:44:19.349793] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:18.689 [2024-12-10 21:44:19.404132] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:18.689 [2024-12-10 21:44:19.404924] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c36060:1 started. 00:17:18.689 [2024-12-10 21:44:19.406246] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:18.689 [2024-12-10 21:44:19.406297] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:18.689 [2024-12-10 21:44:19.406321] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:18.689 [2024-12-10 21:44:19.406339] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:18.689 [2024-12-10 21:44:19.406348] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:18.689 [2024-12-10 21:44:19.412477] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c36060 was disconnected and freed. delete nvme_qpair. 
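The get_bdev_list / sleep 1 cycle that repeats through the surrounding trace is the test's polling loop, waiting for the reattached namespace to show up again as a bdev on the host. A minimal sketch of that pattern follows, assuming rpc_cmd wraps SPDK's scripts/rpc.py and /tmp/host.sock is the host application's RPC socket; the helper names are taken from the trace itself, and the exact comparison in the real script may differ.

# Sketch only -- reconstructed from the xtrace output above, not copied from the test scripts.
get_bdev_list() {
    # List the names of all bdevs known to the host app as one sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Re-query the bdev list once per second until the expected name (e.g. nvme1n1) appears.
    local bdev_name=$1
    while [[ "$(get_bdev_list)" != *"$bdev_name"* ]]; do
        sleep 1
    done
}

Once bdev_get_bdevs reports nvme1n1, the trace below clears the exit trap and tears the test down.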
00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77494 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77494 ']' 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77494 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:18.947 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77494 00:17:18.948 killing process with pid 77494 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77494' 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77494 00:17:18.948 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77494 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.207 rmmod nvme_tcp 00:17:19.207 rmmod nvme_fabrics 00:17:19.207 rmmod nvme_keyring 00:17:19.207 21:44:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77468 ']' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77468 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77468 ']' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77468 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77468 00:17:19.207 killing process with pid 77468 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77468' 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77468 00:17:19.207 21:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77468 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.466 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:19.725 00:17:19.725 real 0m12.884s 00:17:19.725 user 0m21.829s 00:17:19.725 sys 0m2.421s 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:19.725 ************************************ 00:17:19.725 END TEST nvmf_discovery_remove_ifc 00:17:19.725 ************************************ 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.725 ************************************ 00:17:19.725 START TEST nvmf_identify_kernel_target 00:17:19.725 ************************************ 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:19.725 * Looking for test storage... 
00:17:19.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.725 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.726 --rc genhtml_branch_coverage=1 00:17:19.726 --rc genhtml_function_coverage=1 00:17:19.726 --rc genhtml_legend=1 00:17:19.726 --rc geninfo_all_blocks=1 00:17:19.726 --rc geninfo_unexecuted_blocks=1 00:17:19.726 00:17:19.726 ' 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.726 --rc genhtml_branch_coverage=1 00:17:19.726 --rc genhtml_function_coverage=1 00:17:19.726 --rc genhtml_legend=1 00:17:19.726 --rc geninfo_all_blocks=1 00:17:19.726 --rc geninfo_unexecuted_blocks=1 00:17:19.726 00:17:19.726 ' 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.726 --rc genhtml_branch_coverage=1 00:17:19.726 --rc genhtml_function_coverage=1 00:17:19.726 --rc genhtml_legend=1 00:17:19.726 --rc geninfo_all_blocks=1 00:17:19.726 --rc geninfo_unexecuted_blocks=1 00:17:19.726 00:17:19.726 ' 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.726 --rc genhtml_branch_coverage=1 00:17:19.726 --rc genhtml_function_coverage=1 00:17:19.726 --rc genhtml_legend=1 00:17:19.726 --rc geninfo_all_blocks=1 00:17:19.726 --rc geninfo_unexecuted_blocks=1 00:17:19.726 00:17:19.726 ' 00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:17:19.726 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:19.987 21:44:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:19.987 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.988 21:44:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.988 Cannot find device "nvmf_init_br" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.988 Cannot find device "nvmf_init_br2" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.988 Cannot find device "nvmf_tgt_br" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.988 Cannot find device "nvmf_tgt_br2" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:19.988 Cannot find device "nvmf_init_br" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:19.988 Cannot find device "nvmf_init_br2" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:19.988 Cannot find device "nvmf_tgt_br" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:19.988 Cannot find device "nvmf_tgt_br2" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:19.988 Cannot find device "nvmf_br" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:19.988 Cannot find device "nvmf_init_if" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:19.988 Cannot find device "nvmf_init_if2" 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.988 21:44:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:19.988 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:20.247 21:44:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:20.247 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:20.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:17:20.248 00:17:20.248 --- 10.0.0.3 ping statistics --- 00:17:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.248 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:20.248 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:20.248 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:20.248 00:17:20.248 --- 10.0.0.4 ping statistics --- 00:17:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.248 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:20.248 00:17:20.248 --- 10.0.0.1 ping statistics --- 00:17:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.248 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:20.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:20.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:20.248 00:17:20.248 --- 10.0.0.2 ping statistics --- 00:17:20.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.248 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:20.248 21:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:20.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:20.765 Waiting for block devices as requested 00:17:20.765 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:20.765 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:20.765 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:21.024 No valid GPT data, bailing 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:21.024 21:44:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:21.024 No valid GPT data, bailing 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:21.024 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:21.025 No valid GPT data, bailing 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:21.025 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:21.025 No valid GPT data, bailing 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -a 10.0.0.1 -t tcp -s 4420 00:17:21.285 00:17:21.285 Discovery Log Number of Records 2, Generation counter 2 00:17:21.285 =====Discovery Log Entry 0====== 00:17:21.285 trtype: tcp 00:17:21.285 adrfam: ipv4 00:17:21.285 subtype: current discovery subsystem 00:17:21.285 treq: not specified, sq flow control disable supported 00:17:21.285 portid: 1 00:17:21.285 trsvcid: 4420 00:17:21.285 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:21.285 traddr: 10.0.0.1 00:17:21.285 eflags: none 00:17:21.285 sectype: none 00:17:21.285 =====Discovery Log Entry 1====== 00:17:21.285 trtype: tcp 00:17:21.285 adrfam: ipv4 00:17:21.285 subtype: nvme subsystem 00:17:21.285 treq: not 
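The configfs sequence traced just above (nvmf/common.sh@686-705) is what exposes the selected block device as a kernel NVMe-oF/TCP target before the `nvme discover` call. A minimal standalone sketch of that flow follows; it is an assumption-labelled reconstruction, not the script itself: the xtrace does not show the redirect targets, so the attribute names below are the standard nvmet configfs ones, /dev/nvme1n1, nqn.2016-06.io.spdk:testnqn and 10.0.0.1:4420 are the placeholder values matching the trace, and the allow_any_host line is an assumption (the script may restrict hosts instead).

    modprobe nvmet
    modprobe nvmet_tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo 1 > "$subsys/attr_allow_any_host"              # accept any host NQN (assumption, not shown in the trace)
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"              # attach the namespace
    echo 10.0.0.1 > "$port/addr_traddr"                 # listener address, transport, port and address family
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                 # publish the subsystem on the port

Once the symlink exists, a discovery against 10.0.0.1:4420 should return two records, the discovery subsystem plus the test subsystem, which is what the trace below reports.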
specified, sq flow control disable supported 00:17:21.285 portid: 1 00:17:21.285 trsvcid: 4420 00:17:21.285 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:21.285 traddr: 10.0.0.1 00:17:21.285 eflags: none 00:17:21.285 sectype: none 00:17:21.285 21:44:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:21.285 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:21.285 ===================================================== 00:17:21.285 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:21.285 ===================================================== 00:17:21.285 Controller Capabilities/Features 00:17:21.285 ================================ 00:17:21.285 Vendor ID: 0000 00:17:21.285 Subsystem Vendor ID: 0000 00:17:21.285 Serial Number: 1668bafa4ec3145e0cc6 00:17:21.285 Model Number: Linux 00:17:21.285 Firmware Version: 6.8.9-20 00:17:21.285 Recommended Arb Burst: 0 00:17:21.285 IEEE OUI Identifier: 00 00 00 00:17:21.285 Multi-path I/O 00:17:21.285 May have multiple subsystem ports: No 00:17:21.285 May have multiple controllers: No 00:17:21.285 Associated with SR-IOV VF: No 00:17:21.285 Max Data Transfer Size: Unlimited 00:17:21.285 Max Number of Namespaces: 0 00:17:21.285 Max Number of I/O Queues: 1024 00:17:21.285 NVMe Specification Version (VS): 1.3 00:17:21.285 NVMe Specification Version (Identify): 1.3 00:17:21.285 Maximum Queue Entries: 1024 00:17:21.285 Contiguous Queues Required: No 00:17:21.285 Arbitration Mechanisms Supported 00:17:21.285 Weighted Round Robin: Not Supported 00:17:21.285 Vendor Specific: Not Supported 00:17:21.285 Reset Timeout: 7500 ms 00:17:21.285 Doorbell Stride: 4 bytes 00:17:21.285 NVM Subsystem Reset: Not Supported 00:17:21.285 Command Sets Supported 00:17:21.285 NVM Command Set: Supported 00:17:21.285 Boot Partition: Not Supported 00:17:21.285 Memory Page Size Minimum: 4096 bytes 00:17:21.285 Memory Page Size Maximum: 4096 bytes 00:17:21.285 Persistent Memory Region: Not Supported 00:17:21.285 Optional Asynchronous Events Supported 00:17:21.285 Namespace Attribute Notices: Not Supported 00:17:21.285 Firmware Activation Notices: Not Supported 00:17:21.285 ANA Change Notices: Not Supported 00:17:21.285 PLE Aggregate Log Change Notices: Not Supported 00:17:21.285 LBA Status Info Alert Notices: Not Supported 00:17:21.285 EGE Aggregate Log Change Notices: Not Supported 00:17:21.285 Normal NVM Subsystem Shutdown event: Not Supported 00:17:21.285 Zone Descriptor Change Notices: Not Supported 00:17:21.285 Discovery Log Change Notices: Supported 00:17:21.285 Controller Attributes 00:17:21.285 128-bit Host Identifier: Not Supported 00:17:21.285 Non-Operational Permissive Mode: Not Supported 00:17:21.285 NVM Sets: Not Supported 00:17:21.285 Read Recovery Levels: Not Supported 00:17:21.285 Endurance Groups: Not Supported 00:17:21.285 Predictable Latency Mode: Not Supported 00:17:21.285 Traffic Based Keep ALive: Not Supported 00:17:21.285 Namespace Granularity: Not Supported 00:17:21.285 SQ Associations: Not Supported 00:17:21.285 UUID List: Not Supported 00:17:21.285 Multi-Domain Subsystem: Not Supported 00:17:21.285 Fixed Capacity Management: Not Supported 00:17:21.285 Variable Capacity Management: Not Supported 00:17:21.285 Delete Endurance Group: Not Supported 00:17:21.285 Delete NVM Set: Not Supported 00:17:21.285 Extended LBA Formats Supported: Not Supported 00:17:21.285 Flexible Data 
Placement Supported: Not Supported 00:17:21.285 00:17:21.285 Controller Memory Buffer Support 00:17:21.285 ================================ 00:17:21.285 Supported: No 00:17:21.285 00:17:21.285 Persistent Memory Region Support 00:17:21.285 ================================ 00:17:21.285 Supported: No 00:17:21.285 00:17:21.285 Admin Command Set Attributes 00:17:21.285 ============================ 00:17:21.285 Security Send/Receive: Not Supported 00:17:21.285 Format NVM: Not Supported 00:17:21.285 Firmware Activate/Download: Not Supported 00:17:21.285 Namespace Management: Not Supported 00:17:21.285 Device Self-Test: Not Supported 00:17:21.285 Directives: Not Supported 00:17:21.286 NVMe-MI: Not Supported 00:17:21.286 Virtualization Management: Not Supported 00:17:21.286 Doorbell Buffer Config: Not Supported 00:17:21.286 Get LBA Status Capability: Not Supported 00:17:21.286 Command & Feature Lockdown Capability: Not Supported 00:17:21.286 Abort Command Limit: 1 00:17:21.286 Async Event Request Limit: 1 00:17:21.286 Number of Firmware Slots: N/A 00:17:21.286 Firmware Slot 1 Read-Only: N/A 00:17:21.286 Firmware Activation Without Reset: N/A 00:17:21.286 Multiple Update Detection Support: N/A 00:17:21.286 Firmware Update Granularity: No Information Provided 00:17:21.286 Per-Namespace SMART Log: No 00:17:21.286 Asymmetric Namespace Access Log Page: Not Supported 00:17:21.286 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:21.286 Command Effects Log Page: Not Supported 00:17:21.286 Get Log Page Extended Data: Supported 00:17:21.286 Telemetry Log Pages: Not Supported 00:17:21.286 Persistent Event Log Pages: Not Supported 00:17:21.286 Supported Log Pages Log Page: May Support 00:17:21.286 Commands Supported & Effects Log Page: Not Supported 00:17:21.286 Feature Identifiers & Effects Log Page:May Support 00:17:21.286 NVMe-MI Commands & Effects Log Page: May Support 00:17:21.286 Data Area 4 for Telemetry Log: Not Supported 00:17:21.286 Error Log Page Entries Supported: 1 00:17:21.286 Keep Alive: Not Supported 00:17:21.286 00:17:21.286 NVM Command Set Attributes 00:17:21.286 ========================== 00:17:21.286 Submission Queue Entry Size 00:17:21.286 Max: 1 00:17:21.286 Min: 1 00:17:21.286 Completion Queue Entry Size 00:17:21.286 Max: 1 00:17:21.286 Min: 1 00:17:21.286 Number of Namespaces: 0 00:17:21.286 Compare Command: Not Supported 00:17:21.286 Write Uncorrectable Command: Not Supported 00:17:21.286 Dataset Management Command: Not Supported 00:17:21.286 Write Zeroes Command: Not Supported 00:17:21.286 Set Features Save Field: Not Supported 00:17:21.286 Reservations: Not Supported 00:17:21.286 Timestamp: Not Supported 00:17:21.286 Copy: Not Supported 00:17:21.286 Volatile Write Cache: Not Present 00:17:21.286 Atomic Write Unit (Normal): 1 00:17:21.286 Atomic Write Unit (PFail): 1 00:17:21.286 Atomic Compare & Write Unit: 1 00:17:21.286 Fused Compare & Write: Not Supported 00:17:21.286 Scatter-Gather List 00:17:21.286 SGL Command Set: Supported 00:17:21.286 SGL Keyed: Not Supported 00:17:21.286 SGL Bit Bucket Descriptor: Not Supported 00:17:21.286 SGL Metadata Pointer: Not Supported 00:17:21.286 Oversized SGL: Not Supported 00:17:21.286 SGL Metadata Address: Not Supported 00:17:21.286 SGL Offset: Supported 00:17:21.286 Transport SGL Data Block: Not Supported 00:17:21.286 Replay Protected Memory Block: Not Supported 00:17:21.286 00:17:21.286 Firmware Slot Information 00:17:21.286 ========================= 00:17:21.286 Active slot: 0 00:17:21.286 00:17:21.286 00:17:21.286 Error Log 
00:17:21.286 ========= 00:17:21.286 00:17:21.286 Active Namespaces 00:17:21.286 ================= 00:17:21.286 Discovery Log Page 00:17:21.286 ================== 00:17:21.286 Generation Counter: 2 00:17:21.286 Number of Records: 2 00:17:21.286 Record Format: 0 00:17:21.286 00:17:21.286 Discovery Log Entry 0 00:17:21.286 ---------------------- 00:17:21.286 Transport Type: 3 (TCP) 00:17:21.286 Address Family: 1 (IPv4) 00:17:21.286 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:21.286 Entry Flags: 00:17:21.286 Duplicate Returned Information: 0 00:17:21.286 Explicit Persistent Connection Support for Discovery: 0 00:17:21.286 Transport Requirements: 00:17:21.286 Secure Channel: Not Specified 00:17:21.286 Port ID: 1 (0x0001) 00:17:21.286 Controller ID: 65535 (0xffff) 00:17:21.286 Admin Max SQ Size: 32 00:17:21.286 Transport Service Identifier: 4420 00:17:21.286 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:21.286 Transport Address: 10.0.0.1 00:17:21.286 Discovery Log Entry 1 00:17:21.286 ---------------------- 00:17:21.286 Transport Type: 3 (TCP) 00:17:21.286 Address Family: 1 (IPv4) 00:17:21.286 Subsystem Type: 2 (NVM Subsystem) 00:17:21.286 Entry Flags: 00:17:21.286 Duplicate Returned Information: 0 00:17:21.286 Explicit Persistent Connection Support for Discovery: 0 00:17:21.286 Transport Requirements: 00:17:21.286 Secure Channel: Not Specified 00:17:21.286 Port ID: 1 (0x0001) 00:17:21.286 Controller ID: 65535 (0xffff) 00:17:21.286 Admin Max SQ Size: 32 00:17:21.286 Transport Service Identifier: 4420 00:17:21.286 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:21.286 Transport Address: 10.0.0.1 00:17:21.286 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:21.546 get_feature(0x01) failed 00:17:21.546 get_feature(0x02) failed 00:17:21.546 get_feature(0x04) failed 00:17:21.546 ===================================================== 00:17:21.546 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:21.546 ===================================================== 00:17:21.546 Controller Capabilities/Features 00:17:21.546 ================================ 00:17:21.546 Vendor ID: 0000 00:17:21.546 Subsystem Vendor ID: 0000 00:17:21.546 Serial Number: f70164ec60ad9a4c0823 00:17:21.546 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:21.546 Firmware Version: 6.8.9-20 00:17:21.546 Recommended Arb Burst: 6 00:17:21.546 IEEE OUI Identifier: 00 00 00 00:17:21.546 Multi-path I/O 00:17:21.546 May have multiple subsystem ports: Yes 00:17:21.546 May have multiple controllers: Yes 00:17:21.546 Associated with SR-IOV VF: No 00:17:21.546 Max Data Transfer Size: Unlimited 00:17:21.546 Max Number of Namespaces: 1024 00:17:21.546 Max Number of I/O Queues: 128 00:17:21.546 NVMe Specification Version (VS): 1.3 00:17:21.546 NVMe Specification Version (Identify): 1.3 00:17:21.546 Maximum Queue Entries: 1024 00:17:21.546 Contiguous Queues Required: No 00:17:21.546 Arbitration Mechanisms Supported 00:17:21.546 Weighted Round Robin: Not Supported 00:17:21.546 Vendor Specific: Not Supported 00:17:21.546 Reset Timeout: 7500 ms 00:17:21.546 Doorbell Stride: 4 bytes 00:17:21.546 NVM Subsystem Reset: Not Supported 00:17:21.546 Command Sets Supported 00:17:21.546 NVM Command Set: Supported 00:17:21.546 Boot Partition: Not Supported 00:17:21.546 Memory 
Page Size Minimum: 4096 bytes 00:17:21.546 Memory Page Size Maximum: 4096 bytes 00:17:21.546 Persistent Memory Region: Not Supported 00:17:21.546 Optional Asynchronous Events Supported 00:17:21.546 Namespace Attribute Notices: Supported 00:17:21.546 Firmware Activation Notices: Not Supported 00:17:21.546 ANA Change Notices: Supported 00:17:21.546 PLE Aggregate Log Change Notices: Not Supported 00:17:21.546 LBA Status Info Alert Notices: Not Supported 00:17:21.546 EGE Aggregate Log Change Notices: Not Supported 00:17:21.546 Normal NVM Subsystem Shutdown event: Not Supported 00:17:21.546 Zone Descriptor Change Notices: Not Supported 00:17:21.546 Discovery Log Change Notices: Not Supported 00:17:21.546 Controller Attributes 00:17:21.546 128-bit Host Identifier: Supported 00:17:21.546 Non-Operational Permissive Mode: Not Supported 00:17:21.546 NVM Sets: Not Supported 00:17:21.546 Read Recovery Levels: Not Supported 00:17:21.546 Endurance Groups: Not Supported 00:17:21.546 Predictable Latency Mode: Not Supported 00:17:21.546 Traffic Based Keep ALive: Supported 00:17:21.546 Namespace Granularity: Not Supported 00:17:21.546 SQ Associations: Not Supported 00:17:21.546 UUID List: Not Supported 00:17:21.546 Multi-Domain Subsystem: Not Supported 00:17:21.546 Fixed Capacity Management: Not Supported 00:17:21.546 Variable Capacity Management: Not Supported 00:17:21.546 Delete Endurance Group: Not Supported 00:17:21.546 Delete NVM Set: Not Supported 00:17:21.546 Extended LBA Formats Supported: Not Supported 00:17:21.546 Flexible Data Placement Supported: Not Supported 00:17:21.546 00:17:21.546 Controller Memory Buffer Support 00:17:21.546 ================================ 00:17:21.546 Supported: No 00:17:21.546 00:17:21.546 Persistent Memory Region Support 00:17:21.546 ================================ 00:17:21.546 Supported: No 00:17:21.546 00:17:21.546 Admin Command Set Attributes 00:17:21.546 ============================ 00:17:21.546 Security Send/Receive: Not Supported 00:17:21.546 Format NVM: Not Supported 00:17:21.546 Firmware Activate/Download: Not Supported 00:17:21.546 Namespace Management: Not Supported 00:17:21.546 Device Self-Test: Not Supported 00:17:21.546 Directives: Not Supported 00:17:21.546 NVMe-MI: Not Supported 00:17:21.546 Virtualization Management: Not Supported 00:17:21.546 Doorbell Buffer Config: Not Supported 00:17:21.546 Get LBA Status Capability: Not Supported 00:17:21.546 Command & Feature Lockdown Capability: Not Supported 00:17:21.547 Abort Command Limit: 4 00:17:21.547 Async Event Request Limit: 4 00:17:21.547 Number of Firmware Slots: N/A 00:17:21.547 Firmware Slot 1 Read-Only: N/A 00:17:21.547 Firmware Activation Without Reset: N/A 00:17:21.547 Multiple Update Detection Support: N/A 00:17:21.547 Firmware Update Granularity: No Information Provided 00:17:21.547 Per-Namespace SMART Log: Yes 00:17:21.547 Asymmetric Namespace Access Log Page: Supported 00:17:21.547 ANA Transition Time : 10 sec 00:17:21.547 00:17:21.547 Asymmetric Namespace Access Capabilities 00:17:21.547 ANA Optimized State : Supported 00:17:21.547 ANA Non-Optimized State : Supported 00:17:21.547 ANA Inaccessible State : Supported 00:17:21.547 ANA Persistent Loss State : Supported 00:17:21.547 ANA Change State : Supported 00:17:21.547 ANAGRPID is not changed : No 00:17:21.547 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:21.547 00:17:21.547 ANA Group Identifier Maximum : 128 00:17:21.547 Number of ANA Group Identifiers : 128 00:17:21.547 Max Number of Allowed Namespaces : 1024 00:17:21.547 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:21.547 Command Effects Log Page: Supported 00:17:21.547 Get Log Page Extended Data: Supported 00:17:21.547 Telemetry Log Pages: Not Supported 00:17:21.547 Persistent Event Log Pages: Not Supported 00:17:21.547 Supported Log Pages Log Page: May Support 00:17:21.547 Commands Supported & Effects Log Page: Not Supported 00:17:21.547 Feature Identifiers & Effects Log Page:May Support 00:17:21.547 NVMe-MI Commands & Effects Log Page: May Support 00:17:21.547 Data Area 4 for Telemetry Log: Not Supported 00:17:21.547 Error Log Page Entries Supported: 128 00:17:21.547 Keep Alive: Supported 00:17:21.547 Keep Alive Granularity: 1000 ms 00:17:21.547 00:17:21.547 NVM Command Set Attributes 00:17:21.547 ========================== 00:17:21.547 Submission Queue Entry Size 00:17:21.547 Max: 64 00:17:21.547 Min: 64 00:17:21.547 Completion Queue Entry Size 00:17:21.547 Max: 16 00:17:21.547 Min: 16 00:17:21.547 Number of Namespaces: 1024 00:17:21.547 Compare Command: Not Supported 00:17:21.547 Write Uncorrectable Command: Not Supported 00:17:21.547 Dataset Management Command: Supported 00:17:21.547 Write Zeroes Command: Supported 00:17:21.547 Set Features Save Field: Not Supported 00:17:21.547 Reservations: Not Supported 00:17:21.547 Timestamp: Not Supported 00:17:21.547 Copy: Not Supported 00:17:21.547 Volatile Write Cache: Present 00:17:21.547 Atomic Write Unit (Normal): 1 00:17:21.547 Atomic Write Unit (PFail): 1 00:17:21.547 Atomic Compare & Write Unit: 1 00:17:21.547 Fused Compare & Write: Not Supported 00:17:21.547 Scatter-Gather List 00:17:21.547 SGL Command Set: Supported 00:17:21.547 SGL Keyed: Not Supported 00:17:21.547 SGL Bit Bucket Descriptor: Not Supported 00:17:21.547 SGL Metadata Pointer: Not Supported 00:17:21.547 Oversized SGL: Not Supported 00:17:21.547 SGL Metadata Address: Not Supported 00:17:21.547 SGL Offset: Supported 00:17:21.547 Transport SGL Data Block: Not Supported 00:17:21.547 Replay Protected Memory Block: Not Supported 00:17:21.547 00:17:21.547 Firmware Slot Information 00:17:21.547 ========================= 00:17:21.547 Active slot: 0 00:17:21.547 00:17:21.547 Asymmetric Namespace Access 00:17:21.547 =========================== 00:17:21.547 Change Count : 0 00:17:21.547 Number of ANA Group Descriptors : 1 00:17:21.547 ANA Group Descriptor : 0 00:17:21.547 ANA Group ID : 1 00:17:21.547 Number of NSID Values : 1 00:17:21.547 Change Count : 0 00:17:21.547 ANA State : 1 00:17:21.547 Namespace Identifier : 1 00:17:21.547 00:17:21.547 Commands Supported and Effects 00:17:21.547 ============================== 00:17:21.547 Admin Commands 00:17:21.547 -------------- 00:17:21.547 Get Log Page (02h): Supported 00:17:21.547 Identify (06h): Supported 00:17:21.547 Abort (08h): Supported 00:17:21.547 Set Features (09h): Supported 00:17:21.547 Get Features (0Ah): Supported 00:17:21.547 Asynchronous Event Request (0Ch): Supported 00:17:21.547 Keep Alive (18h): Supported 00:17:21.547 I/O Commands 00:17:21.547 ------------ 00:17:21.547 Flush (00h): Supported 00:17:21.547 Write (01h): Supported LBA-Change 00:17:21.547 Read (02h): Supported 00:17:21.547 Write Zeroes (08h): Supported LBA-Change 00:17:21.547 Dataset Management (09h): Supported 00:17:21.547 00:17:21.547 Error Log 00:17:21.547 ========= 00:17:21.547 Entry: 0 00:17:21.547 Error Count: 0x3 00:17:21.547 Submission Queue Id: 0x0 00:17:21.547 Command Id: 0x5 00:17:21.547 Phase Bit: 0 00:17:21.547 Status Code: 0x2 00:17:21.547 Status Code Type: 0x0 00:17:21.547 Do Not Retry: 1 00:17:21.547 Error 
Location: 0x28 00:17:21.547 LBA: 0x0 00:17:21.547 Namespace: 0x0 00:17:21.547 Vendor Log Page: 0x0 00:17:21.547 ----------- 00:17:21.547 Entry: 1 00:17:21.547 Error Count: 0x2 00:17:21.547 Submission Queue Id: 0x0 00:17:21.547 Command Id: 0x5 00:17:21.547 Phase Bit: 0 00:17:21.547 Status Code: 0x2 00:17:21.547 Status Code Type: 0x0 00:17:21.547 Do Not Retry: 1 00:17:21.547 Error Location: 0x28 00:17:21.547 LBA: 0x0 00:17:21.547 Namespace: 0x0 00:17:21.547 Vendor Log Page: 0x0 00:17:21.547 ----------- 00:17:21.547 Entry: 2 00:17:21.547 Error Count: 0x1 00:17:21.547 Submission Queue Id: 0x0 00:17:21.547 Command Id: 0x4 00:17:21.547 Phase Bit: 0 00:17:21.547 Status Code: 0x2 00:17:21.547 Status Code Type: 0x0 00:17:21.547 Do Not Retry: 1 00:17:21.547 Error Location: 0x28 00:17:21.547 LBA: 0x0 00:17:21.547 Namespace: 0x0 00:17:21.547 Vendor Log Page: 0x0 00:17:21.547 00:17:21.547 Number of Queues 00:17:21.547 ================ 00:17:21.547 Number of I/O Submission Queues: 128 00:17:21.547 Number of I/O Completion Queues: 128 00:17:21.547 00:17:21.547 ZNS Specific Controller Data 00:17:21.547 ============================ 00:17:21.547 Zone Append Size Limit: 0 00:17:21.547 00:17:21.547 00:17:21.547 Active Namespaces 00:17:21.547 ================= 00:17:21.547 get_feature(0x05) failed 00:17:21.547 Namespace ID:1 00:17:21.547 Command Set Identifier: NVM (00h) 00:17:21.547 Deallocate: Supported 00:17:21.547 Deallocated/Unwritten Error: Not Supported 00:17:21.547 Deallocated Read Value: Unknown 00:17:21.547 Deallocate in Write Zeroes: Not Supported 00:17:21.547 Deallocated Guard Field: 0xFFFF 00:17:21.547 Flush: Supported 00:17:21.547 Reservation: Not Supported 00:17:21.547 Namespace Sharing Capabilities: Multiple Controllers 00:17:21.547 Size (in LBAs): 1310720 (5GiB) 00:17:21.547 Capacity (in LBAs): 1310720 (5GiB) 00:17:21.547 Utilization (in LBAs): 1310720 (5GiB) 00:17:21.547 UUID: 49b85d5d-1f87-49ac-ad83-c6740bdc8164 00:17:21.547 Thin Provisioning: Not Supported 00:17:21.547 Per-NS Atomic Units: Yes 00:17:21.547 Atomic Boundary Size (Normal): 0 00:17:21.547 Atomic Boundary Size (PFail): 0 00:17:21.547 Atomic Boundary Offset: 0 00:17:21.547 NGUID/EUI64 Never Reused: No 00:17:21.547 ANA group ID: 1 00:17:21.547 Namespace Write Protected: No 00:17:21.547 Number of LBA Formats: 1 00:17:21.547 Current LBA Format: LBA Format #00 00:17:21.548 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:21.548 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.548 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.548 rmmod nvme_tcp 00:17:21.548 rmmod nvme_fabrics 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:21.807 21:44:22 
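For reference, the same listener that spdk_nvme_identify queried above can also be exercised with the standard nvme-cli. This is a hedged sketch, not part of the test: the controller node name /dev/nvme2 is a hypothetical placeholder (the real name depends on enumeration order), and the grep pattern only trims the identify output for readability.

    nvme discover -t tcp -a 10.0.0.1 -s 4420                          # should list the same two records as above
    nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme list                                                         # the exported namespace shows up as a new block device
    nvme id-ctrl /dev/nvme2 | grep -iE 'subnqn|^mn|^fr'               # hypothetical controller name
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn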
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0
00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:17:21.807 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:17:22.067 21:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:22.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:22.634 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:17:22.634 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:17:22.892
00:17:22.892 real 0m3.101s
00:17:22.892 user 0m1.088s
00:17:22.892 sys 0m1.395s
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.892 ************************************
00:17:22.892 END TEST nvmf_identify_kernel_target
00:17:22.892 ************************************
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:17:22.892 ************************************
00:17:22.892 START TEST nvmf_auth_host
00:17:22.892 ************************************
00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp
00:17:22.892 * Looking for test storage...
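The clean_kernel_target steps traced at the end of the previous test tear the kernel target down in reverse order of creation. A condensed sketch, using the same paths as the setup; the redirect target of the `echo 0` is not shown in the xtrace, so the namespace enable attribute is an assumption, and the module unload only succeeds if nothing else still holds nvmet.

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$subsys/namespaces/1/enable"                   # detach the namespace first (assumed redirect target)
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"     # unlink the subsystem from the port
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet                              # will fail if the modules are still in use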
00:17:22.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.892 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.152 --rc genhtml_branch_coverage=1 00:17:23.152 --rc genhtml_function_coverage=1 00:17:23.152 --rc genhtml_legend=1 00:17:23.152 --rc geninfo_all_blocks=1 00:17:23.152 --rc geninfo_unexecuted_blocks=1 00:17:23.152 00:17:23.152 ' 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.152 --rc genhtml_branch_coverage=1 00:17:23.152 --rc genhtml_function_coverage=1 00:17:23.152 --rc genhtml_legend=1 00:17:23.152 --rc geninfo_all_blocks=1 00:17:23.152 --rc geninfo_unexecuted_blocks=1 00:17:23.152 00:17:23.152 ' 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.152 --rc genhtml_branch_coverage=1 00:17:23.152 --rc genhtml_function_coverage=1 00:17:23.152 --rc genhtml_legend=1 00:17:23.152 --rc geninfo_all_blocks=1 00:17:23.152 --rc geninfo_unexecuted_blocks=1 00:17:23.152 00:17:23.152 ' 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.152 --rc genhtml_branch_coverage=1 00:17:23.152 --rc genhtml_function_coverage=1 00:17:23.152 --rc genhtml_legend=1 00:17:23.152 --rc geninfo_all_blocks=1 00:17:23.152 --rc geninfo_unexecuted_blocks=1 00:17:23.152 00:17:23.152 ' 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.152 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.153 Cannot find device "nvmf_init_br" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.153 Cannot find device "nvmf_init_br2" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.153 Cannot find device "nvmf_tgt_br" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.153 Cannot find device "nvmf_tgt_br2" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.153 Cannot find device "nvmf_init_br" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.153 Cannot find device "nvmf_init_br2" 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:23.153 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.154 Cannot find device "nvmf_tgt_br" 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.154 Cannot find device "nvmf_tgt_br2" 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.154 Cannot find device "nvmf_br" 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.154 Cannot find device "nvmf_init_if" 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.154 Cannot find device "nvmf_init_if2" 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.154 21:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.154 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.154 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.413 21:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
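The veth/bridge topology being assembled in this stretch of nvmf_veth_init (the remaining enslave and iptables steps follow just below) reduces to the commands sketched here, condensed to one initiator/target pair; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is built the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end, 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end, 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # both *_br ends hang off one bridge
  ip link set nvmf_tgt_br master nvmf_br
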
00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:17:23.413 00:17:23.413 --- 10.0.0.3 ping statistics --- 00:17:23.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.413 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.413 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:23.413 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:23.413 00:17:23.413 --- 10.0.0.4 ping statistics --- 00:17:23.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.413 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:23.413 00:17:23.413 --- 10.0.0.1 ping statistics --- 00:17:23.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.413 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:23.413 00:17:23.413 --- 10.0.0.2 ping statistics --- 00:17:23.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.413 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78474 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78474 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78474 ']' 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
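Once the iptables ACCEPT rules for port 4420 are in place and the four pings above confirm both directions work, starting the target comes down to running nvmf_tgt inside the namespace and waiting for its RPC socket; a sketch using the binary path, flags and pid from this run (the backgrounding shown is how nvmfappstart behaves here, not its verbatim body):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!                 # 78474 in this log
  waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock
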
00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.413 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a7259bfca6cfd73db38bcef717e40ae 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jqz 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a7259bfca6cfd73db38bcef717e40ae 0 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a7259bfca6cfd73db38bcef717e40ae 0 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a7259bfca6cfd73db38bcef717e40ae 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jqz 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jqz 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jqz 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.982 21:44:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=217fb77d335ec7864b9b376364b7d464358af71d44b710bdf4253e39794af452 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:23.982 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RPU 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 217fb77d335ec7864b9b376364b7d464358af71d44b710bdf4253e39794af452 3 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 217fb77d335ec7864b9b376364b7d464358af71d44b710bdf4253e39794af452 3 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=217fb77d335ec7864b9b376364b7d464358af71d44b710bdf4253e39794af452 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RPU 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RPU 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RPU 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cf7702622a2c2c0e641e94dcdfcc82fcc70822702ffc68a5 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1cj 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cf7702622a2c2c0e641e94dcdfcc82fcc70822702ffc68a5 0 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cf7702622a2c2c0e641e94dcdfcc82fcc70822702ffc68a5 0 
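Each gen_dhchap_key call in this block follows the same recipe: pull random bytes from /dev/urandom as hex, wrap them in the DHHC-1 secret representation, and store the result in a 0600 temp file. A standalone sketch (the python3 one-liner is an assumption about what the "python -" step does; the four appended bytes are taken to be a little-endian CRC-32 of the secret, which is consistent with the keys echoed in this log):

  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 16 random bytes -> 32 hex chars (the len=32 case)
  file=$(mktemp -t spdk.key-null.XXX)
  # digest suffix in the prefix: 00=null, 01=sha256, 02=sha384, 03=sha512, matching the keys above
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode()+":")' "$key" > "$file"
  chmod 0600 "$file"
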
00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cf7702622a2c2c0e641e94dcdfcc82fcc70822702ffc68a5 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1cj 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1cj 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1cj 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=139a07db116355b12d09c71a07f71e18a1ac21039b888087 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OFv 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 139a07db116355b12d09c71a07f71e18a1ac21039b888087 2 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 139a07db116355b12d09c71a07f71e18a1ac21039b888087 2 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=139a07db116355b12d09c71a07f71e18a1ac21039b888087 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:23.983 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OFv 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OFv 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OFv 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.291 21:44:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8941bc8587541e663e342772dccdf416 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KOa 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8941bc8587541e663e342772dccdf416 1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8941bc8587541e663e342772dccdf416 1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8941bc8587541e663e342772dccdf416 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KOa 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KOa 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KOa 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c98a745b73c1b75b0677f6c9e9045b60 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6HA 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c98a745b73c1b75b0677f6c9e9045b60 1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c98a745b73c1b75b0677f6c9e9045b60 1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=c98a745b73c1b75b0677f6c9e9045b60 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6HA 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6HA 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6HA 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=735ac40b10ee59dfc9c44a4ab93eaca06bac4cee8da9279b 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xot 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 735ac40b10ee59dfc9c44a4ab93eaca06bac4cee8da9279b 2 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 735ac40b10ee59dfc9c44a4ab93eaca06bac4cee8da9279b 2 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=735ac40b10ee59dfc9c44a4ab93eaca06bac4cee8da9279b 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:24.291 21:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xot 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xot 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Xot 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:24.291 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:24.292 21:44:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=36dc276bf8f4a2b1e363272495f617f6 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OMG 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 36dc276bf8f4a2b1e363272495f617f6 0 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 36dc276bf8f4a2b1e363272495f617f6 0 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=36dc276bf8f4a2b1e363272495f617f6 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:24.292 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OMG 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OMG 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OMG 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:24.573 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=67b026bb40fd4c645518deb63b461604ad1fa9ea1acf784a520e662b3a465623 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2Zm 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 67b026bb40fd4c645518deb63b461604ad1fa9ea1acf784a520e662b3a465623 3 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 67b026bb40fd4c645518deb63b461604ad1fa9ea1acf784a520e662b3a465623 3 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=67b026bb40fd4c645518deb63b461604ad1fa9ea1acf784a520e662b3a465623 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2Zm 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2Zm 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2Zm 00:17:24.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78474 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78474 ']' 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.574 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jqz 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RPU ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RPU 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1cj 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OFv ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.OFv 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KOa 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6HA ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6HA 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Xot 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OMG ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OMG 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Zm 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.833 21:44:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:24.833 21:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:25.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.350 Waiting for block devices as requested 00:17:25.350 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:25.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:25.917 No valid GPT data, bailing 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:25.917 No valid GPT data, bailing 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:25.917 No valid GPT data, bailing 00:17:25.917 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:26.176 No valid GPT data, bailing 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -a 10.0.0.1 -t tcp -s 4420 00:17:26.176 00:17:26.176 Discovery Log Number of Records 2, Generation counter 2 00:17:26.176 =====Discovery Log Entry 0====== 00:17:26.176 trtype: tcp 00:17:26.176 adrfam: ipv4 00:17:26.176 subtype: current discovery subsystem 00:17:26.176 treq: not specified, sq flow control disable supported 00:17:26.176 portid: 1 00:17:26.176 trsvcid: 4420 00:17:26.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:26.176 traddr: 10.0.0.1 00:17:26.176 eflags: none 00:17:26.176 sectype: none 00:17:26.176 =====Discovery Log Entry 1====== 00:17:26.176 trtype: tcp 00:17:26.176 adrfam: ipv4 00:17:26.176 subtype: nvme subsystem 00:17:26.176 treq: not specified, sq flow control disable supported 00:17:26.176 portid: 1 00:17:26.176 trsvcid: 4420 00:17:26.176 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:26.176 traddr: 10.0.0.1 00:17:26.176 eflags: none 00:17:26.176 sectype: none 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.176 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.177 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:26.177 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:26.177 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.177 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 nvme0n1 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.436 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.696 nvme0n1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.696 
21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.696 21:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.696 nvme0n1 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.696 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:26.956 21:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.956 nvme0n1 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.956 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.957 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.957 21:44:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.216 nvme0n1 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.216 
21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.216 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.217 21:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
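The pass above exercises every key slot (keyid 0-4) against the sha256 digest with the ffdhe2048 DH group before host/auth.sh repeats the same sequence for ffdhe3072 and ffdhe4096 below. Each slot goes through one cycle: program the key on the target with nvmet_auth_set_key, restrict the host to a single digest/dhgroup pair with bdev_nvme_set_options, attach the controller with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key exists), confirm the connection via bdev_nvme_get_controllers, then detach. A condensed sketch of that loop follows; it assumes the rpc_cmd helper and the keys/ckeys arrays defined earlier in host/auth.sh, which are not visible in this excerpt, and is an editorial reconstruction rather than the script's literal text:

    # Sketch of the per-key cycle traced above; helper names and the keys/ckeys arrays
    # are assumed from host/auth.sh and only the sha256 digest shown in this trace is used.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"            # target-side key for this slot
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # attach succeeded
        rpc_cmd bdev_nvme_detach_controller nvme0                # clean up before the next keyid
      done
    done

Slot 4 carries no controller key in this run, so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key for it, which is why the key4 attach in the trace omits ckey4.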
00:17:27.475 nvme0n1 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.475 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.734 21:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.734 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.992 nvme0n1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.993 21:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.993 21:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.993 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.252 nvme0n1 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.252 21:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.252 nvme0n1 00:17:28.252 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.252 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.253 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.253 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.253 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.511 nvme0n1 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.511 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.512 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.512 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.770 nvme0n1 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.770 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:28.771 21:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.707 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.708 21:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.708 nvme0n1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.708 21:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.708 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 nvme0n1 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.967 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.226 nvme0n1 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.226 21:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.485 nvme0n1 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:30.485 21:44:31 
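Key id 4 is the odd one out: its controller key (ckey) is empty, so the pass that starts above only authenticates the host, not the controller. The trace's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is what makes that work: the :+ expansion produces the extra --dhchap-ctrlr-key argument only when a controller key exists. A small stand-alone illustration with placeholder values:

    # keys/ckeys are indexed arrays as in host/auth.sh; the contents here are placeholders.
    ckeys=([1]="placeholder-ctrlr-secret" [4]="")

    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0 -> attach_controller gets no --dhchap-ctrlr-key (host-only auth)

    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1 -> bidirectional auth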
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.485 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.744 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.745 nvme0n1 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.745 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.003 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:31.004 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:31.004 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:31.004 21:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.908 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.168 nvme0n1 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.168 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.427 21:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.687 nvme0n1 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.687 21:44:34 
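Every successful attach in the trace is verified the same way before it is torn down: the controller list is read back over RPC and the reported name must equal nvme0. The backslash-escaped right-hand side of [[ nvme0 == \n\v\m\e\0 ]] is simply bash's way of forcing a literal string comparison instead of pattern matching. In short:

    # Confirm the DH-CHAP handshake produced a controller, then detach it.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]   # escaped characters: compare literally, not as a glob
    rpc_cmd bdev_nvme_detach_controller nvme0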
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.687 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.688 21:44:34 
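The bdev_nvme_set_options call that closes the block above is re-issued for every digest/DH-group combination, which is how the trace walks from ffdhe4096 through ffdhe6144 to ffdhe8192 with the same five keys. The driving loops show up in the trace as host/auth.sh@101/@102; their shape is roughly the following sketch (the DH groups are limited to the ones exercised in this part of the log, and only sha256 appears here):

    # Outer loops of the test: every DH group is tried with every key id.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # target side
            connect_authenticate sha256 "$dhgroup" "$keyid"   # host side: set_options + attach + verify
        done
    done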
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.688 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.285 nvme0n1 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.285 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.286 21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.286 
21:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.590 nvme0n1 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.590 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 nvme0n1 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.158 21:44:35 
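The get_main_ns_ip fragment that precedes each attach picks which environment variable holds the address to dial: NVMF_FIRST_TARGET_IP for RDMA runs, NVMF_INITIATOR_IP for TCP runs such as this one, which is how 10.0.0.1 ends up in every bdev_nvme_attach_controller call. A reduced reconstruction of that helper from the trace (the transport variable name and the final indirect expansion are assumptions, not shown verbatim in the log):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # TEST_TRANSPORT=tcp on this run (assumed variable name), so ip becomes NVMF_INITIATOR_IP.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"   # indirect expansion -> 10.0.0.1
    }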
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.158 21:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.727 nvme0n1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.727 21:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.294 nvme0n1 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.295 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.553 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.554 
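The secrets echoed throughout the trace use the NVMe DH-HMAC-CHAP textual key format, DHHC-1:<hh>:<base64>:, where (as far as the format is documented) <hh> names the hash the secret is tied to, 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512, and the base64 payload is the raw secret followed by a 4-byte CRC. With nothing but standard tools you can sanity-check a key's length, e.g. for the keyid-2 secret shown just above:

    # Decode the payload of a DHHC-1 secret and count its bytes (secret + CRC-32).
    key='DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC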
21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.554 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.121 nvme0n1 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.121 21:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.687 nvme0n1 00:17:37.687 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.946 21:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.946 21:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.946 21:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.514 nvme0n1 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.514 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.772 nvme0n1 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:38.772 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.773 nvme0n1 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.773 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:39.031 
21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 nvme0n1 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.031 
21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.031 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.289 nvme0n1 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.289 21:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.289 nvme0n1 00:17:39.289 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.289 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.289 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.289 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.289 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.546 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.546 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.546 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.546 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.546 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.547 nvme0n1 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.547 
21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.547 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.806 21:44:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.806 nvme0n1 00:17:39.806 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:39.807 21:44:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.807 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.066 nvme0n1 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.066 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.067 21:44:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.067 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.326 nvme0n1 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:40.326 
21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.326 21:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
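[editor's sketch] The trace above repeats the same attach/verify/detach cycle for every digest, DH group, and key index. A condensed sketch of one iteration follows, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and reusing the address, port, NQNs, key names, and jq filter printed in this run; the ckeys subset below is illustrative only and mirrors the fact that key index 4 has no controller key in this test.

  # One pass of the loop traced above (sketch, not the verbatim host/auth.sh body).
  keyid=2
  ckeys=( [2]="ckey2" [3]="ckey3" )   # illustrative subset; index 4 deliberately absent
  # Same trick as host/auth.sh@58: expands to nothing when no controller key exists.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  # Restrict the host to the digest/dhgroup under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Authenticate against the target and confirm the controller shows up.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # Tear down before the next digest/dhgroup/key combination.
  rpc_cmd bdev_nvme_detach_controller nvme0

When the key index has no controller key (keyid 4 in this run, where the trace shows ckey= and [[ -z '' ]]), the array expansion yields no extra argument, so the attach is attempted with --dhchap-key only.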
00:17:40.326 nvme0n1 00:17:40.326 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.586 21:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.586 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.920 nvme0n1 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.920 21:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.920 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.921 21:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.921 nvme0n1 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.921 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.180 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.181 nvme0n1 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.181 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.440 21:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.440 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.440 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.441 nvme0n1 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.441 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.700 nvme0n1 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.700 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.959 21:44:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.959 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.218 nvme0n1 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.218 21:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.476 21:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.476 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.734 nvme0n1 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.734 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.735 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.301 nvme0n1 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:43.301 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.302 21:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.560 nvme0n1 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.560 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:43.560 21:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.561 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.143 nvme0n1 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
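[editor note] For every digest/dhgroup/keyid combination, the nvmet_auth_set_key trace above echoes the HMAC name, the FFDHE group and the DHHC-1 secrets into the kernel target's configuration. The redirect targets are not visible in this excerpt, so the sketch below assumes the standard Linux nvmet configfs host attributes and uses placeholder secrets; it is a condensed illustration, not the literal body of the helper.

    hostnqn=nqn.2024-02.io.spdk:host0                       # host NQN used by the attach calls above
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn        # assumed location, not shown in this log
    echo 'hmac(sha384)'              > "$host_dir/dhchap_hash"      # digest selected for DH-HMAC-CHAP
    echo ffdhe6144                   > "$host_dir/dhchap_dhgroup"   # FFDHE group under test
    echo 'DHHC-1:02:<host secret>'   > "$host_dir/dhchap_key"       # key3 from the trace (elided here)
    echo 'DHHC-1:00:<ctrlr secret>'  > "$host_dir/dhchap_ctrl_key"  # ckey3; skipped when the ckey is empty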
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.143 21:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.738 nvme0n1 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.738 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.996 21:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 nvme0n1 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 21:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 21:44:46 
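[editor note] On the host side, each connect_authenticate iteration traced above boils down to four RPC calls: restrict the allowed digest and DH group, attach with the DH-HMAC-CHAP key (plus the controller key when one exists), check that the controller actually came up, and detach again. Condensed sequence taken from the commands in this log; rpc_cmd is the test suite's RPC helper (a wrapper around scripts/rpc.py), and key1/ckey1 are key names registered earlier in the script, outside this excerpt.

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # authentication succeeded if the controller shows up under the expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0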
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.498 nvme0n1 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.498 21:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.498 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.499 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.499 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.499 
21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.066 nvme0n1 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.066 21:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.000 nvme0n1 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.000 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:48.001 21:44:48 
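[editor note] The host/auth.sh@100, @101 and @102 trace lines above are the three nested loops driving the whole matrix; at this point the digest loop has advanced from sha384 to sha512 and the dhgroup loop restarts at ffdhe2048. A reconstructed skeleton follows; the exact contents of the digests, dhgroups and keys arrays are defined earlier in auth.sh and only partially visible in this excerpt.

    for digest in "${digests[@]}"; do                       # sha384 and sha512 appear in this excerpt
        for dhgroup in "${dhgroups[@]}"; do                 # ffdhe2048 .. ffdhe8192 in this excerpt
            for keyid in "${!keys[@]}"; do                  # key ids 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (kernel nvmet)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (SPDK RPCs)
            done
        done
    done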
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.001 21:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 nvme0n1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:48.001 21:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.001 nvme0n1 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.001 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.259 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 nvme0n1 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
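[editor note] All of the secrets in this log use the DH-HMAC-CHAP secret representation DHHC-1:<hash-id>:<base64 key material>:. A quick way to pick one apart is shown below, using key2 as it appears above; the meaning of the hash-id field (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the trailing 4-byte CRC come from the NVMe in-band authentication spec, not from this log.

    key='DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10:'   # key2 used above
    IFS=: read -r fmt hash_id secret _ <<< "$key"
    # 36 decoded bytes here = 32-byte secret + 4-byte CRC, matching hash-id 01
    echo "format=$fmt hash_id=$hash_id decoded_bytes=$(printf '%s' "$secret" | base64 -d | wc -c)"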
echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 nvme0n1 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.518 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.776 nvme0n1 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
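[editor note] Key id 4 is the odd one out: its ckey is empty, so the "[[ -z '' ]]" branches above skip the controller secret and the attach call carries only --dhchap-key key4, setting up unidirectional authentication. The optional argument is built with the ":+" expansion seen on the host/auth.sh@58 trace lines; a minimal standalone illustration with placeholder array contents:

    ckeys=( [3]="DHHC-1:00:placeholder" [4]="" )   # key 3 has a controller secret, key 4 does not
    for keyid in 3 4; do
        # expands to the extra flag only when ckeys[keyid] is non-empty
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "key${keyid}: --dhchap-key key${keyid} ${ckey[*]:-<no controller key>}"
    done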
ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.776 nvme0n1 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.776 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.777 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.777 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.777 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.035 nvme0n1 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.035 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:49.036 
21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.036 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.295 nvme0n1 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.295 
21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.295 21:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.295 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.554 nvme0n1 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.554 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.813 nvme0n1 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.813 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.814 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.073 nvme0n1 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.073 
21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.073 21:44:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.073 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.074 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.074 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.074 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.074 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.074 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.332 nvme0n1 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:50.332 21:44:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.332 21:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.590 nvme0n1 00:17:50.590 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.590 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.590 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.590 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.590 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.591 21:44:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.591 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.849 nvme0n1 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:50.849 
21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:50.849 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.850 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
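The round that follows (ffdhe6144, key index 0) repeats the same pattern as the ffdhe2048/ffdhe3072/ffdhe4096 rounds traced above: host/auth.sh loops over each DH group ("for dhgroup in ${dhgroups[@]}") and each key index ("for keyid in ${!keys[@]}"), programs the target side through nvmet_auth_set_key, then runs connect_authenticate against the SPDK initiator and tears the controller down again. Pieced together from the traced commands themselves, one sha512 iteration looks roughly like the sketch below. Anything the xtrace does not show is an assumption, not taken from this log: in particular, the three echo calls inside nvmet_auth_set_key are presumably redirected into kernel nvmet configfs attributes for the host NQN, but the redirection targets never appear in the trace. The DHHC-1:xx: prefix on each secret encodes how the secret is represented (00 plain, 01/02/03 hashed with SHA-256/384/512 per the NVMe DH-HMAC-CHAP secret format).

# One iteration, reconstructed from the traced commands
# (sha512 digest, ffdhe6144 DH group, key index 0).

# host/auth.sh@103 -- set the target-side key material
nvmet_auth_set_key sha512 ffdhe6144 0
#   echo 'hmac(sha512)'   # digest       (redirection target not visible in xtrace)
#   echo ffdhe6144        # DH group
#   echo "$key"           # DHHC-1:00:... host secret for keyid 0
#   echo "$ckey"          # DHHC-1:03:... controller secret (keyid 0 defines one)

# host/auth.sh@104 -- connect_authenticate sha512 ffdhe6144 0
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0          # 10.0.0.1 comes from get_main_ns_ip
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # host/auth.sh@64
rpc_cmd bdev_nvme_detach_controller nvme0                                # host/auth.sh@65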
00:17:51.110 nvme0n1 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.110 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.110 21:44:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.111 21:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.677 nvme0n1 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.677 21:44:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:51.677 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.678 21:44:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.678 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.936 nvme0n1 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.936 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.194 21:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.452 nvme0n1 00:17:52.452 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.453 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.019 nvme0n1 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.019 21:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.278 nvme0n1 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.278 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3MjU5YmZjYTZjZmQ3M2RiMzhiY2VmNzE3ZTQwYWWsYqZU: 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE3ZmI3N2QzMzVlYzc4NjRiOWIzNzYzNjRiN2Q0NjQzNThhZjcxZDQ0YjcxMGJkZjQyNTNlMzk3OTRhZjQ1MndeLwM=: 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.536 21:44:54 
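At this point the ffdhe6144 pass is complete and the trace re-enters the outer loop with ffdhe8192. The control flow behind the host/auth.sh@101-@104 markers is a nested loop over DH groups and key ids, sketched here from the trace (the dhgroups and keys arrays are populated earlier in the script; only the values actually exercised in this log are known, and sha512 is the digest in play for this part of the run):

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe6144, then ffdhe8192 in this section
      for keyid in "${!keys[@]}"; do       # key ids 0..4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # program the target side
          connect_authenticate sha512 "$dhgroup" "$keyid"   # attach, verify, detach on the initiator
      done
  done
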
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.536 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.103 nvme0n1 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:54.103 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.104 21:44:54 
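A side note on the DHHC-1:NN:<base64>: strings used as key and ckey values: this is the NVMe in-band authentication secret representation, in which (as background, not something this log itself states) the second field records how the secret was transformed (00 unhashed, 01/02/03 for SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. That reading can be sanity-checked against any key in this run, for example the key1 value used just above:

  key='DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==:'
  payload=$(printf '%s' "$key" | cut -d: -f3)    # strip the DHHC-1 and transform fields
  printf '%s' "$payload" | base64 -d | wc -c     # 52 bytes = 48-byte secret + 4-byte CRC-32
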
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.104 21:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 nvme0n1 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.035 21:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.601 nvme0n1 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzM1YWM0MGIxMGVlNTlkZmM5YzQ0YTRhYjkzZWFjYTA2YmFjNGNlZThkYTkyNzlirAG/Fg==: 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZkYzI3NmJmOGY0YTJiMWUzNjMyNzI0OTVmNjE3ZjZ7wpD8: 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.601 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.167 nvme0n1 00:17:56.167 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.167 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.167 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.167 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.167 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.426 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.426 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.426 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.426 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.426 21:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjdiMDI2YmI0MGZkNGM2NDU1MThkZWI2M2I0NjE2MDRhZDFmYTllYTFhY2Y3ODRhNTIwZTY2MmIzYTQ2NTYyM6YkSW4=: 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:56.426 21:44:57 
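The get_main_ns_ip fragments that precede every attach are the nvmf/common.sh helper picking which address to dial. Reconstructed from the xtrace above, it amounts to the sketch below; the TEST_TRANSPORT name is an assumption, since xtrace only shows its expanded value tcp, and the NVMF_* variables come from the test environment, resolving to 10.0.0.1 in this run:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # pick the candidate variable for the transport under test, then dereference it
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }
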
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.426 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.993 nvme0n1 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.993 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.994 request: 00:17:56.994 { 00:17:56.994 "name": "nvme0", 00:17:56.994 "trtype": "tcp", 00:17:56.994 "traddr": "10.0.0.1", 00:17:56.994 "adrfam": "ipv4", 00:17:56.994 "trsvcid": "4420", 00:17:56.994 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:56.994 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:56.994 "prchk_reftag": false, 00:17:56.994 "prchk_guard": false, 00:17:56.994 "hdgst": false, 00:17:56.994 "ddgst": false, 00:17:56.994 "allow_unrecognized_csi": false, 00:17:56.994 "method": "bdev_nvme_attach_controller", 00:17:56.994 "req_id": 1 00:17:56.994 } 00:17:56.994 Got JSON-RPC error response 00:17:56.994 response: 00:17:56.994 { 00:17:56.994 "code": -5, 00:17:56.994 "message": "Input/output error" 00:17:56.994 } 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.994 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.252 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.252 request: 00:17:57.252 { 00:17:57.252 "name": "nvme0", 00:17:57.252 "trtype": "tcp", 00:17:57.253 "traddr": "10.0.0.1", 00:17:57.253 "adrfam": "ipv4", 00:17:57.253 "trsvcid": "4420", 00:17:57.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:57.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:57.253 "prchk_reftag": false, 00:17:57.253 "prchk_guard": false, 00:17:57.253 "hdgst": false, 00:17:57.253 "ddgst": false, 00:17:57.253 "dhchap_key": "key2", 00:17:57.253 "allow_unrecognized_csi": false, 00:17:57.253 "method": "bdev_nvme_attach_controller", 00:17:57.253 "req_id": 1 00:17:57.253 } 00:17:57.253 Got JSON-RPC error response 00:17:57.253 response: 00:17:57.253 { 00:17:57.253 "code": -5, 00:17:57.253 "message": "Input/output error" 00:17:57.253 } 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.253 21:44:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.253 request: 00:17:57.253 { 00:17:57.253 "name": "nvme0", 00:17:57.253 "trtype": "tcp", 00:17:57.253 "traddr": "10.0.0.1", 00:17:57.253 "adrfam": "ipv4", 00:17:57.253 "trsvcid": "4420", 
00:17:57.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:57.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:57.253 "prchk_reftag": false, 00:17:57.253 "prchk_guard": false, 00:17:57.253 "hdgst": false, 00:17:57.253 "ddgst": false, 00:17:57.253 "dhchap_key": "key1", 00:17:57.253 "dhchap_ctrlr_key": "ckey2", 00:17:57.253 "allow_unrecognized_csi": false, 00:17:57.253 "method": "bdev_nvme_attach_controller", 00:17:57.253 "req_id": 1 00:17:57.253 } 00:17:57.253 Got JSON-RPC error response 00:17:57.253 response: 00:17:57.253 { 00:17:57.253 "code": -5, 00:17:57.253 "message": "Input/output error" 00:17:57.253 } 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.253 21:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.512 nvme0n1 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.512 request: 00:17:57.512 { 00:17:57.512 "name": "nvme0", 00:17:57.512 "dhchap_key": "key1", 00:17:57.512 "dhchap_ctrlr_key": "ckey2", 00:17:57.512 "method": "bdev_nvme_set_keys", 00:17:57.512 "req_id": 1 00:17:57.512 } 00:17:57.512 Got JSON-RPC error response 00:17:57.512 response: 00:17:57.512 
{ 00:17:57.512 "code": -5, 00:17:57.512 "message": "Input/output error" 00:17:57.512 } 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:57.512 21:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NzAyNjIyYTJjMmMwZTY0MWU5NGRjZGZjYzgyZmNjNzA4MjI3MDJmZmM2OGE1FaoRFA==: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTM5YTA3ZGIxMTYzNTViMTJkMDljNzFhMDdmNzFlMThhMWFjMjEwMzliODg4MDg382/Ulw==: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 nvme0n1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODk0MWJjODU4NzU0MWU2NjNlMzQyNzcyZGNjZGY0MTb2vm10: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yzk4YTc0NWI3M2MxYjc1YjA2NzdmNmM5ZTkwNDViNjBYYDAp: 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 request: 00:17:58.890 { 00:17:58.890 "name": "nvme0", 00:17:58.890 "dhchap_key": "key2", 00:17:58.890 "dhchap_ctrlr_key": "ckey1", 00:17:58.890 "method": "bdev_nvme_set_keys", 00:17:58.890 "req_id": 1 00:17:58.890 } 00:17:58.890 Got JSON-RPC error response 00:17:58.890 response: 00:17:58.890 { 00:17:58.890 "code": -13, 00:17:58.890 "message": "Permission denied" 00:17:58.890 } 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:58.890 21:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.823 rmmod nvme_tcp 00:17:59.823 rmmod nvme_fabrics 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78474 ']' 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78474 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78474 ']' 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78474 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.823 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78474 00:18:00.081 killing process with pid 78474 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78474' 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78474 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78474 00:18:00.081 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.082 21:45:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.082 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:00.340 21:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:00.340 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:00.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.164 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:01.164 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:01.164 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jqz /tmp/spdk.key-null.1cj /tmp/spdk.key-sha256.KOa /tmp/spdk.key-sha384.Xot /tmp/spdk.key-sha512.2Zm /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:01.164 21:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:01.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.422 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:01.422 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:01.681 00:18:01.681 real 0m38.739s 00:18:01.681 user 0m34.541s 00:18:01.681 sys 0m3.582s 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.681 ************************************ 00:18:01.681 END TEST nvmf_auth_host 00:18:01.681 ************************************ 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.681 ************************************ 00:18:01.681 START TEST nvmf_digest 00:18:01.681 ************************************ 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:01.681 * Looking for test storage... 
00:18:01.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:01.681 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:01.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.682 --rc genhtml_branch_coverage=1 00:18:01.682 --rc genhtml_function_coverage=1 00:18:01.682 --rc genhtml_legend=1 00:18:01.682 --rc geninfo_all_blocks=1 00:18:01.682 --rc geninfo_unexecuted_blocks=1 00:18:01.682 00:18:01.682 ' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:01.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.682 --rc genhtml_branch_coverage=1 00:18:01.682 --rc genhtml_function_coverage=1 00:18:01.682 --rc genhtml_legend=1 00:18:01.682 --rc geninfo_all_blocks=1 00:18:01.682 --rc geninfo_unexecuted_blocks=1 00:18:01.682 00:18:01.682 ' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:01.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.682 --rc genhtml_branch_coverage=1 00:18:01.682 --rc genhtml_function_coverage=1 00:18:01.682 --rc genhtml_legend=1 00:18:01.682 --rc geninfo_all_blocks=1 00:18:01.682 --rc geninfo_unexecuted_blocks=1 00:18:01.682 00:18:01.682 ' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:01.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.682 --rc genhtml_branch_coverage=1 00:18:01.682 --rc genhtml_function_coverage=1 00:18:01.682 --rc genhtml_legend=1 00:18:01.682 --rc geninfo_all_blocks=1 00:18:01.682 --rc geninfo_unexecuted_blocks=1 00:18:01.682 00:18:01.682 ' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.682 21:45:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.682 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.682 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:01.940 Cannot find device "nvmf_init_br" 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:01.940 Cannot find device "nvmf_init_br2" 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:01.940 Cannot find device "nvmf_tgt_br" 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:01.940 Cannot find device "nvmf_tgt_br2" 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:01.940 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:01.940 Cannot find device "nvmf_init_br" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:01.941 Cannot find device "nvmf_init_br2" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:01.941 Cannot find device "nvmf_tgt_br" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:01.941 Cannot find device "nvmf_tgt_br2" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:01.941 Cannot find device "nvmf_br" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:01.941 Cannot find device "nvmf_init_if" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:01.941 Cannot find device "nvmf_init_if2" 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.941 21:45:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.941 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:02.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:02.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:18:02.200 00:18:02.200 --- 10.0.0.3 ping statistics --- 00:18:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.200 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:02.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:02.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:02.200 00:18:02.200 --- 10.0.0.4 ping statistics --- 00:18:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.200 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:02.200 00:18:02.200 --- 10.0.0.1 ping statistics --- 00:18:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.200 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:02.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:18:02.200 00:18:02.200 --- 10.0.0.2 ping statistics --- 00:18:02.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.200 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:02.200 ************************************ 00:18:02.200 START TEST nvmf_digest_clean 00:18:02.200 ************************************ 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80141 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80141 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80141 ']' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.200 21:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.200 [2024-12-10 21:45:02.910009] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:02.200 [2024-12-10 21:45:02.910118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.459 [2024-12-10 21:45:03.071621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.459 [2024-12-10 21:45:03.112426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.459 [2024-12-10 21:45:03.112517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.459 [2024-12-10 21:45:03.112532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.459 [2024-12-10 21:45:03.112542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.459 [2024-12-10 21:45:03.112551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
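nvmfappstart prepends the namespace wrapper to NVMF_APP and starts the target with --wait-for-rpc, so the application pauses before subsystem initialization until it is driven over its RPC socket. The real waitforlisten helper polls that socket; the loop below is only an illustrative stand-in, reusing the binary path and flags from this run:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # block until the target answers on its UNIX-domain RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done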
00:18:02.459 [2024-12-10 21:45:03.112929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.459 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.459 [2024-12-10 21:45:03.231923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.717 null0 00:18:02.717 [2024-12-10 21:45:03.271614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.717 [2024-12-10 21:45:03.295780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80161 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80161 /var/tmp/bperf.sock 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80161 ']' 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.717 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.718 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.718 [2024-12-10 21:45:03.372438] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:02.718 [2024-12-10 21:45:03.372593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80161 ] 00:18:02.976 [2024-12-10 21:45:03.543910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.976 [2024-12-10 21:45:03.601847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.976 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.976 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:02.976 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:02.976 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:02.976 21:45:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:03.541 [2024-12-10 21:45:04.041504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.541 21:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.541 21:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.798 nvme0n1 00:18:03.798 21:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:03.798 21:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:04.057 Running I/O for 2 seconds... 
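The bdevperf process is driven entirely over its own RPC socket at /var/tmp/bperf.sock: the framework is started, an NVMe-oF controller is attached with data digest enabled (--ddgst) so every TCP data PDU carries a CRC32C, and the timed workload is then launched. The same three calls from the trace, condensed into one place:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock framework_start_init
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests              # runs the 2-second randread pass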
00:18:06.367 13970.00 IOPS, 54.57 MiB/s [2024-12-10T21:45:07.150Z] 13478.50 IOPS, 52.65 MiB/s 00:18:06.367 Latency(us) 00:18:06.367 [2024-12-10T21:45:07.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.367 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:06.367 nvme0n1 : 2.01 13502.94 52.75 0.00 0.00 9472.29 2546.97 25380.31 00:18:06.367 [2024-12-10T21:45:07.150Z] =================================================================================================================== 00:18:06.367 [2024-12-10T21:45:07.150Z] Total : 13502.94 52.75 0.00 0.00 9472.29 2546.97 25380.31 00:18:06.367 { 00:18:06.367 "results": [ 00:18:06.367 { 00:18:06.367 "job": "nvme0n1", 00:18:06.367 "core_mask": "0x2", 00:18:06.367 "workload": "randread", 00:18:06.367 "status": "finished", 00:18:06.367 "queue_depth": 128, 00:18:06.367 "io_size": 4096, 00:18:06.367 "runtime": 2.005859, 00:18:06.367 "iops": 13502.943128106213, 00:18:06.367 "mibps": 52.745871594164896, 00:18:06.367 "io_failed": 0, 00:18:06.367 "io_timeout": 0, 00:18:06.367 "avg_latency_us": 9472.285854565593, 00:18:06.367 "min_latency_us": 2546.9672727272728, 00:18:06.367 "max_latency_us": 25380.305454545454 00:18:06.367 } 00:18:06.367 ], 00:18:06.367 "core_count": 1 00:18:06.367 } 00:18:06.367 21:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:06.367 21:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:06.367 21:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:06.367 21:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:06.367 | select(.opcode=="crc32c") 00:18:06.367 | "\(.module_name) \(.executed)"' 00:18:06.367 21:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80161 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80161 ']' 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80161 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.367 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80161 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
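Once the pass finishes, the test does not only report throughput; it queries the accel framework statistics of the bperf process and asserts that the crc32c opcode was executed at least once and by the expected module (software in this configuration, since DSA is disabled). A condensed form of that check, reusing the jq filter shown in the trace:

    stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
    read -r acc_module acc_executed < <(jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"' <<< "$stats")
    (( acc_executed > 0 ))               # digests were really computed during the run
    [[ "$acc_module" == software ]]      # and by the module this pass expects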
00:18:06.626 killing process with pid 80161 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80161' 00:18:06.626 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.626 00:18:06.626 Latency(us) 00:18:06.626 [2024-12-10T21:45:07.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.626 [2024-12-10T21:45:07.409Z] =================================================================================================================== 00:18:06.626 [2024-12-10T21:45:07.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80161 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80161 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80214 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80214 /var/tmp/bperf.sock 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80214 ']' 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.626 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:06.626 [2024-12-10 21:45:07.356566] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
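Teardown between passes goes through the harness's killprocess helper. Judging from the xtrace above, it checks that the pid is set and still alive, refuses to signal a sudo wrapper, then kills the process and reaps it so the next bdevperf instance can reuse /var/tmp/bperf.sock. A rough reconstruction under those assumptions:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                         # is it still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1         # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }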
00:18:06.626 [2024-12-10 21:45:07.356660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80214 ] 00:18:06.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:06.626 Zero copy mechanism will not be used. 00:18:06.884 [2024-12-10 21:45:07.508885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.884 [2024-12-10 21:45:07.554919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.884 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.884 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:06.884 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:06.884 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:06.884 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:07.142 [2024-12-10 21:45:07.870971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.142 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.142 21:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.708 nvme0n1 00:18:07.708 21:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:07.708 21:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.708 Zero copy mechanism will not be used. 00:18:07.708 Running I/O for 2 seconds... 
00:18:10.015 6768.00 IOPS, 846.00 MiB/s [2024-12-10T21:45:10.798Z] 6800.00 IOPS, 850.00 MiB/s 00:18:10.015 Latency(us) 00:18:10.015 [2024-12-10T21:45:10.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.015 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:10.015 nvme0n1 : 2.00 6797.55 849.69 0.00 0.00 2350.04 2085.24 9770.82 00:18:10.015 [2024-12-10T21:45:10.798Z] =================================================================================================================== 00:18:10.015 [2024-12-10T21:45:10.798Z] Total : 6797.55 849.69 0.00 0.00 2350.04 2085.24 9770.82 00:18:10.015 { 00:18:10.015 "results": [ 00:18:10.015 { 00:18:10.015 "job": "nvme0n1", 00:18:10.015 "core_mask": "0x2", 00:18:10.015 "workload": "randread", 00:18:10.015 "status": "finished", 00:18:10.015 "queue_depth": 16, 00:18:10.015 "io_size": 131072, 00:18:10.015 "runtime": 2.003076, 00:18:10.015 "iops": 6797.545375212922, 00:18:10.015 "mibps": 849.6931719016153, 00:18:10.015 "io_failed": 0, 00:18:10.015 "io_timeout": 0, 00:18:10.015 "avg_latency_us": 2350.0378848413634, 00:18:10.015 "min_latency_us": 2085.2363636363634, 00:18:10.015 "max_latency_us": 9770.821818181817 00:18:10.015 } 00:18:10.015 ], 00:18:10.015 "core_count": 1 00:18:10.015 } 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:10.015 | select(.opcode=="crc32c") 00:18:10.015 | "\(.module_name) \(.executed)"' 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80214 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80214 ']' 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80214 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80214 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
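The bandwidth column in these tables is simply IOPS scaled by the I/O size, so the reported numbers can be sanity-checked by hand: 13502.94 IOPS at 4096 bytes is about 52.75 MiB/s (the 4 KiB randread pass earlier), and 6797.55 IOPS at 131072 bytes is about 849.69 MiB/s (the 128 KiB pass above), matching the JSON. The same arithmetic as a one-liner:

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f %.2f\n", 13502.94*4096/1048576, 6797.55*131072/1048576 }'
    # prints: 52.75 849.69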
00:18:10.015 killing process with pid 80214 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80214' 00:18:10.015 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80214 00:18:10.015 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.015 00:18:10.015 Latency(us) 00:18:10.015 [2024-12-10T21:45:10.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.016 [2024-12-10T21:45:10.799Z] =================================================================================================================== 00:18:10.016 [2024-12-10T21:45:10.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.016 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80214 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80267 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80267 /var/tmp/bperf.sock 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80267 ']' 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.274 21:45:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:10.274 [2024-12-10 21:45:10.995121] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:18:10.274 [2024-12-10 21:45:10.995243] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80267 ] 00:18:10.532 [2024-12-10 21:45:11.138031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.532 [2024-12-10 21:45:11.173135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.532 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.532 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:10.532 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:10.532 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:10.532 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:10.791 [2024-12-10 21:45:11.565101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.049 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.049 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.307 nvme0n1 00:18:11.307 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:11.307 21:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:11.565 Running I/O for 2 seconds... 
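The attach step only names the controller (nvme0); the workload then runs against the namespace bdev it exposes, nvme0n1. The following RPCs are not part of the test, but the same socket could be queried after the attach to confirm what was created:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_get_controllers      # should list nvme0
    $RPC -s /var/tmp/bperf.sock bdev_get_bdevs -b nvme0n1      # the namespace bdev bdevperf targets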
00:18:13.432 14479.00 IOPS, 56.56 MiB/s [2024-12-10T21:45:14.215Z] 14732.50 IOPS, 57.55 MiB/s 00:18:13.432 Latency(us) 00:18:13.432 [2024-12-10T21:45:14.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.432 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.432 nvme0n1 : 2.00 14774.32 57.71 0.00 0.00 8654.90 7477.06 17992.61 00:18:13.432 [2024-12-10T21:45:14.215Z] =================================================================================================================== 00:18:13.432 [2024-12-10T21:45:14.215Z] Total : 14774.32 57.71 0.00 0.00 8654.90 7477.06 17992.61 00:18:13.432 { 00:18:13.432 "results": [ 00:18:13.432 { 00:18:13.432 "job": "nvme0n1", 00:18:13.432 "core_mask": "0x2", 00:18:13.432 "workload": "randwrite", 00:18:13.432 "status": "finished", 00:18:13.432 "queue_depth": 128, 00:18:13.432 "io_size": 4096, 00:18:13.432 "runtime": 2.003002, 00:18:13.432 "iops": 14774.32374006616, 00:18:13.432 "mibps": 57.71220210963344, 00:18:13.432 "io_failed": 0, 00:18:13.432 "io_timeout": 0, 00:18:13.432 "avg_latency_us": 8654.900608804908, 00:18:13.432 "min_latency_us": 7477.061818181818, 00:18:13.432 "max_latency_us": 17992.61090909091 00:18:13.432 } 00:18:13.432 ], 00:18:13.432 "core_count": 1 00:18:13.432 } 00:18:13.432 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:13.432 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:13.432 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:13.432 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:13.432 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:13.432 | select(.opcode=="crc32c") 00:18:13.432 | "\(.module_name) \(.executed)"' 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80267 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80267 ']' 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80267 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80267 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
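Each perform_tests invocation returns the JSON document echoed above, so results can also be post-processed mechanically instead of read from the table. For example, assuming the output of this randwrite pass had been saved to a file (results.json is a hypothetical name), the key fields could be pulled with jq:

    jq -r '.results[] | "\(.job) \(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg"' results.json
    # with the JSON above this prints:
    # nvme0n1 14774.32374006616 IOPS, 57.71220210963344 MiB/s, 8654.900608804908 us avg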
00:18:13.690 killing process with pid 80267 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80267' 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80267 00:18:13.690 Received shutdown signal, test time was about 2.000000 seconds 00:18:13.690 00:18:13.690 Latency(us) 00:18:13.690 [2024-12-10T21:45:14.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.690 [2024-12-10T21:45:14.473Z] =================================================================================================================== 00:18:13.690 [2024-12-10T21:45:14.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.690 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80267 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80321 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80321 /var/tmp/bperf.sock 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80321 ']' 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.948 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:13.948 [2024-12-10 21:45:14.656843] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:18:13.948 [2024-12-10 21:45:14.656942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80321 ] 00:18:13.948 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.948 Zero copy mechanism will not be used. 00:18:14.206 [2024-12-10 21:45:14.809651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.206 [2024-12-10 21:45:14.868629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.206 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.206 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:14.206 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:14.206 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:14.206 21:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:14.463 [2024-12-10 21:45:15.231451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.771 21:45:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:14.771 21:45:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.030 nvme0n1 00:18:15.030 21:45:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:15.030 21:45:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:15.030 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.030 Zero copy mechanism will not be used. 00:18:15.030 Running I/O for 2 seconds... 
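With this fourth pass running, the shape of nvmf_digest_clean is clear: the same harness is executed four times, varying only the access pattern, block size, and queue depth while keeping DSA scanning off; every pass attaches with --ddgst and then checks the software crc32c counters. The driver calls from host/digest.sh, gathered in one place:

    run_bperf randread  4096   128 false     # 4 KiB random reads,   QD 128
    run_bperf randread  131072 16  false     # 128 KiB random reads,  QD 16
    run_bperf randwrite 4096   128 false     # 4 KiB random writes,  QD 128
    run_bperf randwrite 131072 16  false     # 128 KiB random writes, QD 16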
00:18:17.337 6155.00 IOPS, 769.38 MiB/s [2024-12-10T21:45:18.120Z] 6194.00 IOPS, 774.25 MiB/s 00:18:17.337 Latency(us) 00:18:17.337 [2024-12-10T21:45:18.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:17.337 nvme0n1 : 2.00 6191.62 773.95 0.00 0.00 2578.05 1861.82 5570.56 00:18:17.337 [2024-12-10T21:45:18.120Z] =================================================================================================================== 00:18:17.337 [2024-12-10T21:45:18.120Z] Total : 6191.62 773.95 0.00 0.00 2578.05 1861.82 5570.56 00:18:17.337 { 00:18:17.337 "results": [ 00:18:17.337 { 00:18:17.337 "job": "nvme0n1", 00:18:17.337 "core_mask": "0x2", 00:18:17.337 "workload": "randwrite", 00:18:17.337 "status": "finished", 00:18:17.337 "queue_depth": 16, 00:18:17.337 "io_size": 131072, 00:18:17.337 "runtime": 2.004485, 00:18:17.337 "iops": 6191.615302683732, 00:18:17.337 "mibps": 773.9519128354665, 00:18:17.337 "io_failed": 0, 00:18:17.337 "io_timeout": 0, 00:18:17.337 "avg_latency_us": 2578.045280066803, 00:18:17.337 "min_latency_us": 1861.8181818181818, 00:18:17.337 "max_latency_us": 5570.56 00:18:17.337 } 00:18:17.337 ], 00:18:17.337 "core_count": 1 00:18:17.337 } 00:18:17.337 21:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:17.337 21:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:17.337 21:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:17.337 21:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:17.337 | select(.opcode=="crc32c") 00:18:17.337 | "\(.module_name) \(.executed)"' 00:18:17.337 21:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80321 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80321 ']' 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80321 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80321 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:17.337 
21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80321' 00:18:17.337 killing process with pid 80321 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80321 00:18:17.337 Received shutdown signal, test time was about 2.000000 seconds 00:18:17.337 00:18:17.337 Latency(us) 00:18:17.337 [2024-12-10T21:45:18.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.337 [2024-12-10T21:45:18.120Z] =================================================================================================================== 00:18:17.337 [2024-12-10T21:45:18.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.337 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80321 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80141 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80141 ']' 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80141 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80141 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.595 killing process with pid 80141 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80141' 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80141 00:18:17.595 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80141 00:18:17.854 00:18:17.854 real 0m15.571s 00:18:17.854 user 0m31.050s 00:18:17.854 sys 0m4.330s 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:17.855 ************************************ 00:18:17.855 END TEST nvmf_digest_clean 00:18:17.855 ************************************ 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:17.855 ************************************ 00:18:17.855 START TEST nvmf_digest_error 00:18:17.855 ************************************ 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:18:17.855 21:45:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80395 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80395 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80395 ']' 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.855 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:17.855 [2024-12-10 21:45:18.513006] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:17.855 [2024-12-10 21:45:18.513095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.113 [2024-12-10 21:45:18.675681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.113 [2024-12-10 21:45:18.721983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.113 [2024-12-10 21:45:18.722057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.113 [2024-12-10 21:45:18.722073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.113 [2024-12-10 21:45:18.722085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.113 [2024-12-10 21:45:18.722096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
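Where nvmf_digest_clean only verified that digests are computed, nvmf_digest_error reconfigures the freshly started target so that crc32c is serviced by the accel error-injection module, and a little further down it tells the bperf initiator to keep retrying while a batch of digests is deliberately corrupted; the COMMAND TRANSIENT TRANSPORT ERROR completions that follow are therefore the expected outcome of the test, not a failure of the run. The relevant RPCs from the trace, condensed (target side uses the default /var/tmp/spdk.sock socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target: route crc32c through the error-injection accel module
    $RPC accel_assign_opc -o crc32c -m error
    $RPC accel_error_inject_error -o crc32c -t disable            # injection initially off
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256     # then corrupt 256 crc32c operations
    # bperf: count NVMe errors and retry indefinitely instead of failing the bdev
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1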
00:18:18.113 [2024-12-10 21:45:18.722628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.113 [2024-12-10 21:45:18.835119] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.113 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.113 [2024-12-10 21:45:18.872138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.371 null0 00:18:18.371 [2024-12-10 21:45:18.907582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.371 [2024-12-10 21:45:18.931711] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80421 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80421 /var/tmp/bperf.sock 00:18:18.371 21:45:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80421 ']' 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:18.371 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:18.372 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:18.372 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.372 21:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.372 [2024-12-10 21:45:19.001229] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:18.372 [2024-12-10 21:45:19.001353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80421 ] 00:18:18.629 [2024-12-10 21:45:19.199068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.629 [2024-12-10 21:45:19.247857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.629 [2024-12-10 21:45:19.291401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.562 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.562 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:19.562 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:19.562 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:19.820 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:20.078 nvme0n1 00:18:20.078 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:20.078 21:45:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.078 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.078 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.078 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:20.078 21:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:20.337 Running I/O for 2 seconds... 00:18:20.337 [2024-12-10 21:45:20.965507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:20.965611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:20.965642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:20.983812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:20.983870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:20.983886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.001790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.001850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.001865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.019860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.019922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.019939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.037930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.038020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.038036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.056498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.056560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1493 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.056574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.074345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.074388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.074401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.092085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.092129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.092142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.337 [2024-12-10 21:45:21.109854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.337 [2024-12-10 21:45:21.109894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.337 [2024-12-10 21:45:21.109907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.595 [2024-12-10 21:45:21.127581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.595 [2024-12-10 21:45:21.127622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.595 [2024-12-10 21:45:21.127635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.595 [2024-12-10 21:45:21.145366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.595 [2024-12-10 21:45:21.145409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.595 [2024-12-10 21:45:21.145423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.595 [2024-12-10 21:45:21.163459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.595 [2024-12-10 21:45:21.163501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.163514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.181579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.181625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.181640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.199567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.199611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.199625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.217440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.217512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.217527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.235677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.235734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.235749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.254210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.254279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.272308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.272359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.272374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.290097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.290138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.290152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.307973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.308023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.308038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.326179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.326224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.326238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.343979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.344020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.344033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.596 [2024-12-10 21:45:21.361797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.596 [2024-12-10 21:45:21.361837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.596 [2024-12-10 21:45:21.361850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.379597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.379633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.379646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.397609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.397674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.397689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.416144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.416218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.416234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.434162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.434211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.434224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.452815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.452868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.452883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.470663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.470708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.470722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.488440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.488489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.488502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.506181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.506221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.506234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.523894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.523933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.523946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.541820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.541864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.541877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.854 [2024-12-10 21:45:21.560153] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.854 [2024-12-10 21:45:21.560195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.854 [2024-12-10 21:45:21.560208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.855 [2024-12-10 21:45:21.578051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.855 [2024-12-10 21:45:21.578104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.855 [2024-12-10 21:45:21.578118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.855 [2024-12-10 21:45:21.596435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.855 [2024-12-10 21:45:21.596508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.855 [2024-12-10 21:45:21.596522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.855 [2024-12-10 21:45:21.614298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.855 [2024-12-10 21:45:21.614337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.855 [2024-12-10 21:45:21.614350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:20.855 [2024-12-10 21:45:21.632070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:20.855 [2024-12-10 21:45:21.632114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:20.855 [2024-12-10 21:45:21.632127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.649965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.650009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.650024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.668431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.668485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.668499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:21.113 [2024-12-10 21:45:21.688432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.688520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.688548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.706548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.706596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.706609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.724314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.724359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.724373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.742327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.742374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.742388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.760480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.760526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.760540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.778439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.778490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.778505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.796339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.796381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.796395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.814214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.814281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.814296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.833432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.833515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.833530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.851237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.851279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.851293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.868972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.869013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.869026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.113 [2024-12-10 21:45:21.886947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.113 [2024-12-10 21:45:21.886990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.113 [2024-12-10 21:45:21.887004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.371 [2024-12-10 21:45:21.905494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:21.905548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:21.905562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:21.924541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:21.924586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:21.924600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 13789.00 IOPS, 53.86 MiB/s [2024-12-10T21:45:22.155Z] [2024-12-10 21:45:21.946732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:21.946776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:21.946790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:21.967330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:21.967401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:21.967415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:21.985297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:21.985347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:21.985362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.003088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.003150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.003164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.020846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.020890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.020904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.039706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.039765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.039780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.058777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.058877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.058900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.077200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.077257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.077272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.095070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.095117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.095131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.120639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.120693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.120707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.372 [2024-12-10 21:45:22.138325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.372 [2024-12-10 21:45:22.138366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.372 [2024-12-10 21:45:22.138380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.156064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.156106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.156119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.173831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.173891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.173906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.192394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.192507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.192534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.210504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.210555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.210570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.228379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.228425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.228439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.246186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.246235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.246249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.263999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.264042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.264056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.281839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.281883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.281897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.630 [2024-12-10 21:45:22.299594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.630 [2024-12-10 21:45:22.299635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.630 [2024-12-10 21:45:22.299649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.317272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.317313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.317326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.335071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.335113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.335126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.352769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.352807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.352820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.370416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.370466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.370480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.388155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.388194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.388207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.631 [2024-12-10 21:45:22.405858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.631 [2024-12-10 21:45:22.405902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.631 [2024-12-10 21:45:22.405916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.423841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.423907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.423931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.441915] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.441989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.442005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.460127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.460182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.460197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.478514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.478610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.496582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.496645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.496660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.514529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.514581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.514595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.532516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.532566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.532580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.550305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.550361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.550375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:21.889 [2024-12-10 21:45:22.568058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.568102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.568115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.586137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.586220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.586238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.606335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.606428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.606472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.625066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.625114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.625129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.642967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.643017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.643031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.889 [2024-12-10 21:45:22.660826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:21.889 [2024-12-10 21:45:22.660881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.889 [2024-12-10 21:45:22.660896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.678781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.678838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.678853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.697061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.697126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.697140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.714879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.714924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.714938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.732734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.732776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.732791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.750490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.750533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.768289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.768336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.768350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.786044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.786085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.786098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.803817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.803857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.803870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.822531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.822614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.822636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.840945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.841014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.841030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.858836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.858881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.858895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.876623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.876670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.876684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.894416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.894475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.894490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.148 [2024-12-10 21:45:22.912453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.148 [2024-12-10 21:45:22.912504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.148 [2024-12-10 21:45:22.912518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.407 [2024-12-10 21:45:22.930999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1810950) 00:18:22.407 [2024-12-10 21:45:22.931062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.407 [2024-12-10 21:45:22.931078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.407 13915.50 IOPS, 54.36 MiB/s 00:18:22.407 Latency(us) 00:18:22.407 [2024-12-10T21:45:23.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.407 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:22.407 nvme0n1 : 2.01 13920.81 54.38 0.00 0.00 9187.67 8400.52 34317.03 00:18:22.407 [2024-12-10T21:45:23.190Z] =================================================================================================================== 00:18:22.407 [2024-12-10T21:45:23.190Z] Total : 13920.81 54.38 0.00 0.00 9187.67 8400.52 34317.03 00:18:22.407 { 00:18:22.407 "results": [ 00:18:22.407 { 00:18:22.407 "job": "nvme0n1", 00:18:22.407 "core_mask": "0x2", 00:18:22.407 "workload": "randread", 00:18:22.407 "status": "finished", 00:18:22.407 "queue_depth": 128, 00:18:22.407 "io_size": 4096, 00:18:22.407 "runtime": 2.008432, 00:18:22.407 "iops": 13920.809865606603, 00:18:22.407 "mibps": 54.37816353752579, 00:18:22.407 "io_failed": 0, 00:18:22.407 "io_timeout": 0, 00:18:22.407 "avg_latency_us": 9187.669918224414, 00:18:22.407 "min_latency_us": 8400.523636363636, 00:18:22.407 "max_latency_us": 34317.03272727273 00:18:22.407 } 00:18:22.407 ], 00:18:22.407 "core_count": 1 00:18:22.407 } 00:18:22.407 21:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:22.407 21:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:22.407 21:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:22.407 21:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:22.407 | .driver_specific 00:18:22.407 | .nvme_error 00:18:22.407 | .status_code 00:18:22.407 | .command_transient_transport_error' 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 109 > 0 )) 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80421 ']' 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:22.665 killing process with pid 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80421' 00:18:22.665 Received shutdown signal, test time was about 2.000000 seconds 00:18:22.665 
00:18:22.665 Latency(us) 00:18:22.665 [2024-12-10T21:45:23.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.665 [2024-12-10T21:45:23.448Z] =================================================================================================================== 00:18:22.665 [2024-12-10T21:45:23.448Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80421 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80480 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80480 /var/tmp/bperf.sock 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80480 ']' 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.665 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:22.923 [2024-12-10 21:45:23.490703] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:22.923 [2024-12-10 21:45:23.490789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80480 ] 00:18:22.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:22.923 Zero copy mechanism will not be used. 
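The same digest.sh flow is now being repeated for a 131072-byte, queue-depth-16 randread job: bdevperf is started on its own RPC socket, per-error-code NVMe statistics are enabled, the TCP target is attached with data-digest checking, crc32c corruption is injected through accel_error_inject_error, the timed workload runs, and the run only passes if the transient-transport-error counter ends up non-zero (the "(( 109 > 0 ))" check above). Condensed into a plain shell sketch for reference — every command below appears verbatim in this trace; rpc_cmd is the autotest wrapper around rpc.py for the target-side app, and /var/tmp/bperf.sock is the bdevperf RPC socket used by this job:
  # enable NVMe error counters and indefinite bdev_nvme retries on the bdevperf side
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the TCP target with data-digest validation enabled (--ddgst)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt 32 crc32c operations so data-digest validation fails on reads
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the timed workload, then read back the transient transport error counter;
  # digest.sh treats the run as successful only when this count is greater than zero
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'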
00:18:22.923 [2024-12-10 21:45:23.629833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.923 [2024-12-10 21:45:23.661953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.923 [2024-12-10 21:45:23.693165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:23.181 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.181 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:23.181 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:23.181 21:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.439 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:23.697 nvme0n1 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:23.697 21:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:23.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:23.956 Zero copy mechanism will not be used. 00:18:23.956 Running I/O for 2 seconds... 
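With bdevperf idle behind -z, the digest-error case is wired up entirely over RPC before perform_tests releases the 2-second run: error statistics plus unlimited bdev retries (--bdev-retry-count -1) on the bdevperf side, crc32c error injection disabled while the controller attaches with data digest (--ddgst) enabled on the TCP connection, then injection re-armed in corrupt mode before the workload starts. A condensed sketch of that sequence using the commands visible in this log; APP_RPC stands in for whichever application rpc_cmd targets (the default RPC socket is an assumption), and 10.0.0.3:4420 plus the -i 32 argument are simply this run's values:

  BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  APP_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'   # default RPC socket (assumed for rpc_cmd)

  # bdevperf: count NVMe error completions per status code, retry failed I/O indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c error injection off while the controller connects cleanly.
  $APP_RPC accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest enabled, exposing it as bdev nvme0n1.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-arm crc32c error injection in corrupt mode (-i 32 as used by this test).
  $APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Release the workload that bdevperf has been holding since -z.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each injected mismatch then surfaces below as a data digest error in nvme_tcp.c followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what the jq counter shown earlier reads back.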
00:18:23.956 [2024-12-10 21:45:24.600415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.600480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.600497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.604908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.604950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.604964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.609488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.609524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.609537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.614055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.614102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.614118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.618599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.618638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.618651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.623116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.623161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.623175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.627711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.627750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.627764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.632186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.632224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.632237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.636612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.636651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.636665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.641174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.641213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.641226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.645619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.645658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.645678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.650135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.956 [2024-12-10 21:45:24.650173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.956 [2024-12-10 21:45:24.650187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.956 [2024-12-10 21:45:24.654626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.654667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.654681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.659219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.659257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.659271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.663719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.663765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.663780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.668151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.668190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.668204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.672547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.672586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.672600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.677004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.677043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.677057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.681500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.681537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.681550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.685901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.685941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.685955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.690659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.690696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.690711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.695567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.695607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.695621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.700509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.700546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.700561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.704944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.704984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.704998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.709457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.709497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.709510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.713950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.713988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.714001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.718492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.718529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.722968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.723006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 
[2024-12-10 21:45:24.723020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.727371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.727406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.727420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.731878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.731918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.731931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.957 [2024-12-10 21:45:24.736310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:23.957 [2024-12-10 21:45:24.736347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.957 [2024-12-10 21:45:24.736360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.740815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.740868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.745349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.745384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.745397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.749726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.749762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.749775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.754184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.754221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.754235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.758653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.758688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.758702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.763098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.763144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.763159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.767601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.767636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.772084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.772121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.772135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.776478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.776512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.776525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.780927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.780964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.780978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.785360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.785396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.785409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.789848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.789884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.789897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.232 [2024-12-10 21:45:24.794187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.232 [2024-12-10 21:45:24.794229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.232 [2024-12-10 21:45:24.794243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.798642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.798677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.798691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.803078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.803115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.807570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.807605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.807618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.812027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.812064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.812078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.816454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.816489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.816502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.820886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.820923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.820936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.825308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.825345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.825358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.829758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.829794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.829807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.834072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.834112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.834126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.838510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.838546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.838559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.842893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.842928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.842942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.847325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 
[2024-12-10 21:45:24.847361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.847374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.851818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.851854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.851868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.856145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.856184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.856198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.860532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.860569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.860582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.864882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.864917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.864931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.869323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.869360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.869373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.873810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.873847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.873861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.233 [2024-12-10 21:45:24.878191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe86800) 00:18:24.233 [2024-12-10 21:45:24.878229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.233 [2024-12-10 21:45:24.878242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.882516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.882551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.882564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.886878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.886915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.891333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.891380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.891393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.895774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.895809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.895823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.900197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.900234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.900248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.904543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.904580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.904593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.908986] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.909022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.909035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.913419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.913470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.913484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.917899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.917935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.917949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.922299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.922336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.922350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.926735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.926771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.926785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.931051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.931087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.931100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.935440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.935489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.935502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:24.234 [2024-12-10 21:45:24.939845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.939886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.939907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.944312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.944348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.944361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.948722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.948758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.948772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.953080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.953117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.953130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.957474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.957509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.957523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.234 [2024-12-10 21:45:24.961883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.234 [2024-12-10 21:45:24.961919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.234 [2024-12-10 21:45:24.961932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.966341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.966378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.966392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.970826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.970870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.970900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.975752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.975793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.975807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.980322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.980360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.980374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.984880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.984917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.984930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.989396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.989434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.989461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.993891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.993927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.993940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:24.998199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:24.998235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:24.998248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:25.002598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:25.002635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:25.002649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:25.007046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:25.007082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:25.007095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.235 [2024-12-10 21:45:25.011484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.235 [2024-12-10 21:45:25.011519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.235 [2024-12-10 21:45:25.011532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:24.501 [2024-12-10 21:45:25.015956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.501 [2024-12-10 21:45:25.015991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.501 [2024-12-10 21:45:25.016005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:24.501 [2024-12-10 21:45:25.020266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.501 [2024-12-10 21:45:25.020301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.501 [2024-12-10 21:45:25.020314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:24.501 [2024-12-10 21:45:25.024650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.501 [2024-12-10 21:45:25.024684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.501 [2024-12-10 21:45:25.024697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:24.501 [2024-12-10 21:45:25.029074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:24.501 [2024-12-10 21:45:25.029109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.501 [2024-12-10 21:45:25.029122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:18:24.501 [2024-12-10 21:45:25.033492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800)
00:18:24.501 [2024-12-10 21:45:25.033526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:24.501 [2024-12-10 21:45:25.033540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:24.501 [2024-12-10 21:45:25.038147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800)
00:18:24.501 [2024-12-10 21:45:25.038185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:24.501 [2024-12-10 21:45:25.038198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1365 data digest error on tqpair=(0xe86800), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats on qid:1 for cid 0-15 with varying lba values, from [2024-12-10 21:45:25.042650] through [2024-12-10 21:45:25.595365] ...]
00:18:25.023 6913.00 IOPS, 864.12 MiB/s [2024-12-10T21:45:25.806Z]
[... the repeating sequence continues from [2024-12-10 21:45:25.600850] through [2024-12-10 21:45:25.646633] ...]
00:18:25.023 [2024-12-10 21:45:25.651061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800)
00:18:25.023 [2024-12-10 21:45:25.651097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:25.023 [2024-12-10 21:45:25.651110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.655591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.655639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.655652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.660074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.660112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.660126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.664425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.664475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.664489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.668866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.668902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.668916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.673331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.673368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.673381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.677756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.677793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.677807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.682422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.682468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.682482] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.686868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.686905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.686919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.691207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.691244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.691266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.695674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.695709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.695723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.700083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.700122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.700136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.704601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.704644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.704658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.709056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.709112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.709135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.713751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.713789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
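The repeated nvme_tcp_accel_seq_recv_compute_crc32_done errors above come from the NVMe/TCP data digest check: the transport appends a CRC-32C (DDGST) over each data PDU's payload, and in this run the received payloads fail that check, so every READ is completed with a transport error instead of data. As a minimal standalone illustration only (not SPDK code; the function name and test payload are made up for the example), the digest value can be computed with the common bitwise reflected CRC-32C:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (reflected) CRC-32C: polynomial 0x1EDC6F41, reflected form 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    const char *payload = "123456789";   /* example payload, not a real PDU */
    printf("computed DDGST: 0x%08lX\n",
           (unsigned long)crc32c(payload, strlen(payload)));
    return 0;
}

Built with any C99 compiler this prints 0xE3069283, the standard CRC-32C check value for "123456789". On a live connection the host computes the same CRC over the C2HData PDU payload and compares it with the DDGST that trails the PDU; a mismatch is treated as a transport-level failure, which is what each "data digest error on tqpair" record in this log reports.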
00:18:25.023 [2024-12-10 21:45:25.713803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.718256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.023 [2024-12-10 21:45:25.718296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.023 [2024-12-10 21:45:25.718310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.023 [2024-12-10 21:45:25.722744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.722782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.722795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.727268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.727318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.727332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.731768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.731797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.731811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.736227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.736265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.736278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.740658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.740695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.740709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.745160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.745196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.745209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.749634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.749670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.749684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.754118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.754154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.754168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.758643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.758679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.758692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.763111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.763161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.763175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.767536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.767574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.767587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.771884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.771922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.771936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.776341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.776378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.776391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.780863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.780900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.785363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.785401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.785415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.789765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.789800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.789814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.794272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.794309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.794322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.024 [2024-12-10 21:45:25.798737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.024 [2024-12-10 21:45:25.798772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.024 [2024-12-10 21:45:25.798786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.803097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.803133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.803157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.807569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.807605] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.807619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.812256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.812302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.812324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.817038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.817076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.817090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.821555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.821611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.821625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.826096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.826133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.826147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.830497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.830536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.830549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.834953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.834999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.835012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.839470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 
00:18:25.284 [2024-12-10 21:45:25.839505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.839518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.843930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.843967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.843981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.848366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.848408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.848422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.852801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.852836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.852849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.857124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.857162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.857176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.861515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.861552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.861565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.866053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.866092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.866106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.870458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.870492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.870504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.875095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.875131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.875160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.879660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.879695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.879708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.884152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.884188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.284 [2024-12-10 21:45:25.884201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.284 [2024-12-10 21:45:25.888784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.284 [2024-12-10 21:45:25.888820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.888833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.893293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.893330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.893343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.897701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.897739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.897752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.902133] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.902172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.902185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.906579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.906615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.906628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.910919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.910954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.910967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.915277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.915312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.915325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.919717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.919753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.919766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.924001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.924037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.924050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.928367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.928405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.928418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
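Each digest error above is paired with a completion that SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR (00/22) together with sqhd, p, m, and dnr fields. The pair in parentheses is (status code type / status code): SCT 0x0 (generic) with SC 0x22, which the driver names Command Transient Transport Error, i.e. a retryable transport failure rather than a media or command error. The sketch below (illustrative only; the struct and values are not SPDK types, and the field layout is the NVMe base specification's completion queue entry Dwords 2 and 3) shows how those printed fields fall out of the raw completion:

#include <stdint.h>
#include <stdio.h>

/* Last two dwords of an NVMe completion queue entry. */
struct cqe_tail { uint32_t dw2; uint32_t dw3; };

static void print_completion(struct cqe_tail c)
{
    unsigned sqhd = c.dw2 & 0xFFFFu;         /* SQ head pointer          */
    unsigned cid  = c.dw3 & 0xFFFFu;         /* command identifier       */
    unsigned p    = (c.dw3 >> 16) & 0x1u;    /* phase tag                */
    unsigned sc   = (c.dw3 >> 17) & 0xFFu;   /* status code              */
    unsigned sct  = (c.dw3 >> 25) & 0x7u;    /* status code type         */
    unsigned m    = (c.dw3 >> 30) & 0x1u;    /* more (additional info)   */
    unsigned dnr  = (c.dw3 >> 31) & 0x1u;    /* do not retry             */

    printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, cid, sqhd, p, m, dnr);
}

int main(void)
{
    /* Hypothetical entry shaped like the completion logged just above:
     * SCT 0x0, SC 0x22, cid 8, sqhd 0x0002, p/m/dnr all clear. */
    struct cqe_tail c = { .dw2 = 0x0002u, .dw3 = (0x22u << 17) | 8u };
    print_completion(c);
    return 0;
}

Running it reproduces the shape of the entries above, e.g. (00/22) cid:8 sqhd:0002 p:0 m:0 dnr:0. Because dnr (do not retry) is 0 the status is retryable, which is consistent with the run continuing to report IOPS/throughput in between these error records.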
00:18:25.285 [2024-12-10 21:45:25.933022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.933061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.933074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.937672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.937711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.937725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.941960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.941995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.942008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.946391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.946430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.946458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.950821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.950857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.950871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.955286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.955322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.955335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.960209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.960248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.960263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.965018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.965057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.965070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.969589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.974182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.974223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.974236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.978678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.978714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.978727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.983222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.983258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.983272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.987683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.987721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.987734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.992145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.992181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.992195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:25.996660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:25.996696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:25.996710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:26.001164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:26.001203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:26.001216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:26.005641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:26.005678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:26.005692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:26.010116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.285 [2024-12-10 21:45:26.010155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.285 [2024-12-10 21:45:26.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.285 [2024-12-10 21:45:26.014592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.014631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.014645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.018997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.019036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.019049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.023346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.023384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.023397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.027830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.027867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.027880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.032210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.032247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.032259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.036676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.036714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.036728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.041089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.041125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.041138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.045502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.045540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.045553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.049863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.049901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.049915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.054213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.054252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 
[2024-12-10 21:45:26.054266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.058546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.058584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.058598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.286 [2024-12-10 21:45:26.062990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.286 [2024-12-10 21:45:26.063029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.286 [2024-12-10 21:45:26.063042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.067598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.067638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.067651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.072220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.072261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.072275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.076787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.076827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.076841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.081328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.081369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.081382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.085914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.085954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.085968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.090340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.090381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.090395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.094857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.094899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.094913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.099375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.099414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.099428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.103820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.103864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.103879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.108458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.108497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.108511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.112884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.112922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.112935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.117387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.117428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.117455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.121966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.122007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.122021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.126494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.126533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.126547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.131081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.131122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.131147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.135539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.135587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.135601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.140030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.140070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.140083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.144514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.144553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.144567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.149073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.149113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.149126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.153572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.153617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.153631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.158169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.158209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.158223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.162685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.162724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.546 [2024-12-10 21:45:26.162738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.546 [2024-12-10 21:45:26.167357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.546 [2024-12-10 21:45:26.167394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.167408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.171832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.171869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.171883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.176404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.176454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.176470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.180910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 
00:18:25.547 [2024-12-10 21:45:26.180946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.180959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.185379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.185417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.185431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.189943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.189980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.189993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.194393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.194430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.194459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.198986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.199024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.199038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.203536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.203573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.203587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.208014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.208052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.208065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.212486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.212524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.212538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.217097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.217135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.217149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.222430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.222489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.222503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.227027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.227065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.227079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.231510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.231547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.231560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.235982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.236021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.236034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.240647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.240683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.240697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.245146] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.245183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.245197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.249514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.249547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.249560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.253942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.253979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.253992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.258385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.258424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.258437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.262855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.262892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.262905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.267412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.267468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.267489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.272014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.272052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.272066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:18:25.547 [2024-12-10 21:45:26.276584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.276621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.276635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.281086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.281123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.281136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.285639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.547 [2024-12-10 21:45:26.285683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.547 [2024-12-10 21:45:26.285697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.547 [2024-12-10 21:45:26.290003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.290039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.290052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.294379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.294415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.294429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.298706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.298742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.298755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.303108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.303157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.303171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.307563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.307598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.307612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.312033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.312084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.316333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.316370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.316383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.320740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.320777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.320790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.548 [2024-12-10 21:45:26.325233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.548 [2024-12-10 21:45:26.325270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.548 [2024-12-10 21:45:26.325283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.329622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.329660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.329674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.334055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.334092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.334105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.338438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.338486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.338500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.342903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.342942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.342956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.347914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.347953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.347968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.352638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.352680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.352694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.357157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.357195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.357209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.361687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.361727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.361740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.366204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.366246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.366260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.370698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.370735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.370749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.375278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.375316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.375330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.379878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.379916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.379929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.384471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.384508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.384522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.389070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.389108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.389122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.393485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.393526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.393539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.398004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.398045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 
[2024-12-10 21:45:26.398060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.402682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.808 [2024-12-10 21:45:26.402720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.808 [2024-12-10 21:45:26.402733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.808 [2024-12-10 21:45:26.407483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.407521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.407535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.411973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.412012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.412025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.416438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.416488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.416502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.421204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.421244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.421258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.425787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.425826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.425839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.430383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.430423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.430438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.435086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.435124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.435149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.439685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.439727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.439741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.444362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.444401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.448778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.448813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.448826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.453287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.453369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.457973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.458010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.458024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.462473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.462510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.462524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.467192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.467231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.467245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.471604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.471640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.476038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.476075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.476088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.480726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.480763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.485295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.485331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.485345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.489764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.489800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.489814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.494243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.494281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.494295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.498734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.498770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.498784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.503272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.503309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.503323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.507760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.507797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.507810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.512316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.512354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.512368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.516845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.516882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.516895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.521298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 [2024-12-10 21:45:26.521335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.521349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.525829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.809 
[2024-12-10 21:45:26.525865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.809 [2024-12-10 21:45:26.525878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.809 [2024-12-10 21:45:26.530272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.530309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.530322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.534734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.534770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.534784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.539279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.539315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.539328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.543770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.543806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.543820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.548214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.548253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.548266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.552717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.552753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.552766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.557220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.557257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.557270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.561677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.561713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.561727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.566294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.566330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.566343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.570784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.570819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.570832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.575376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.575412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.575425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.579784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.579820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.579833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:25.810 [2024-12-10 21:45:26.584231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:25.810 [2024-12-10 21:45:26.584268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:25.810 [2024-12-10 21:45:26.584282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:26.069 [2024-12-10 21:45:26.588628] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:26.069 [2024-12-10 21:45:26.588663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.069 [2024-12-10 21:45:26.588676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:26.069 [2024-12-10 21:45:26.593234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:26.069 [2024-12-10 21:45:26.593270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.069 [2024-12-10 21:45:26.593283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:26.069 [2024-12-10 21:45:26.597703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe86800) 00:18:26.069 [2024-12-10 21:45:26.597739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:26.069 [2024-12-10 21:45:26.597752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:26.069 6889.50 IOPS, 861.19 MiB/s 00:18:26.069 Latency(us) 00:18:26.069 [2024-12-10T21:45:26.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.069 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:26.069 nvme0n1 : 2.00 6886.08 860.76 0.00 0.00 2319.68 2070.34 5987.61 00:18:26.069 [2024-12-10T21:45:26.852Z] =================================================================================================================== 00:18:26.069 [2024-12-10T21:45:26.852Z] Total : 6886.08 860.76 0.00 0.00 2319.68 2070.34 5987.61 00:18:26.069 { 00:18:26.069 "results": [ 00:18:26.069 { 00:18:26.069 "job": "nvme0n1", 00:18:26.069 "core_mask": "0x2", 00:18:26.069 "workload": "randread", 00:18:26.069 "status": "finished", 00:18:26.069 "queue_depth": 16, 00:18:26.069 "io_size": 131072, 00:18:26.069 "runtime": 2.003318, 00:18:26.069 "iops": 6886.075999916139, 00:18:26.069 "mibps": 860.7594999895174, 00:18:26.069 "io_failed": 0, 00:18:26.069 "io_timeout": 0, 00:18:26.069 "avg_latency_us": 2319.682663415598, 00:18:26.069 "min_latency_us": 2070.3418181818183, 00:18:26.069 "max_latency_us": 5987.607272727273 00:18:26.069 } 00:18:26.069 ], 00:18:26.069 "core_count": 1 00:18:26.069 } 00:18:26.069 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:26.069 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:26.069 | .driver_specific 00:18:26.069 | .nvme_error 00:18:26.069 | .status_code 00:18:26.069 | .command_transient_transport_error' 00:18:26.069 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:26.069 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 445 > 0 )) 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80480 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80480 ']' 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80480 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80480 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:26.328 killing process with pid 80480 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80480' 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80480 00:18:26.328 Received shutdown signal, test time was about 2.000000 seconds 00:18:26.328 00:18:26.328 Latency(us) 00:18:26.328 [2024-12-10T21:45:27.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.328 [2024-12-10T21:45:27.111Z] =================================================================================================================== 00:18:26.328 [2024-12-10T21:45:27.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.328 21:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80480 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80533 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80533 /var/tmp/bperf.sock 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80533 ']' 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
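The (( 445 > 0 )) evaluation traced above is the check this iteration has to pass: with --nvme-error-stat enabled, bdevperf keeps per-bdev NVMe error counters, and the test requires that at least one COMMAND TRANSIENT TRANSPORT ERROR completion was recorded while crc32c corruption was active. A minimal sketch of what the traced get_transient_errcount/bperf_rpc helpers boil down to (socket path, bdev name and jq filter copied from the trace; everything else is an illustration, not the exact host/digest.sh code):

    # Read the per-bdev NVMe error counters back over the bdevperf RPC socket
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                 | .command_transient_transport_error')
    # The iteration only passes if digest errors were actually observed (445 here)
    (( errcount > 0 ))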
00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.587 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:26.587 [2024-12-10 21:45:27.165134] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:26.587 [2024-12-10 21:45:27.165225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80533 ] 00:18:26.587 [2024-12-10 21:45:27.311343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.587 [2024-12-10 21:45:27.355487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.845 [2024-12-10 21:45:27.389176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.845 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.845 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:26.845 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.845 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.103 21:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.362 nvme0n1 00:18:27.621 21:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:27.621 21:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.621 21:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:27.621 21:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.621 21:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:27.621 21:45:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:27.621 Running I/O for 2 seconds... 00:18:27.621 [2024-12-10 21:45:28.329696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef7100 00:18:27.621 [2024-12-10 21:45:28.331389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.621 [2024-12-10 21:45:28.331429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:27.621 [2024-12-10 21:45:28.346597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef7970 00:18:27.621 [2024-12-10 21:45:28.348250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.621 [2024-12-10 21:45:28.348286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.621 [2024-12-10 21:45:28.363248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef81e0 00:18:27.621 [2024-12-10 21:45:28.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.621 [2024-12-10 21:45:28.364902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:27.621 [2024-12-10 21:45:28.379921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef8a50 00:18:27.621 [2024-12-10 21:45:28.381515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.621 [2024-12-10 21:45:28.381547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:27.621 [2024-12-10 21:45:28.396567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef92c0 00:18:27.621 [2024-12-10 21:45:28.398131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.621 [2024-12-10 21:45:28.398163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:27.879 [2024-12-10 21:45:28.413210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef9b30 00:18:27.880 [2024-12-10 21:45:28.414761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.414793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.429827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efa3a0 00:18:27.880 [2024-12-10 21:45:28.431383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:24771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.431416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.446511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efac10 00:18:27.880 [2024-12-10 21:45:28.448034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.448065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.463163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efb480 00:18:27.880 [2024-12-10 21:45:28.464675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.464706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.479813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efbcf0 00:18:27.880 [2024-12-10 21:45:28.481278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.481310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.496524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efc560 00:18:27.880 [2024-12-10 21:45:28.497962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.497995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.513384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efcdd0 00:18:27.880 [2024-12-10 21:45:28.514838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.530394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efd640 00:18:27.880 [2024-12-10 21:45:28.531842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.531878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.547249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efdeb0 00:18:27.880 [2024-12-10 21:45:28.548656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:5060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.548692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.563953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efe720 00:18:27.880 [2024-12-10 21:45:28.565314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.565347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.580657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eff3c8 00:18:27.880 [2024-12-10 21:45:28.582007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.582040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.604415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eff3c8 00:18:27.880 [2024-12-10 21:45:28.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.607094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.621098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efe720 00:18:27.880 [2024-12-10 21:45:28.623752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.623785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.637986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efdeb0 00:18:27.880 [2024-12-10 21:45:28.640635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.640674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:27.880 [2024-12-10 21:45:28.654872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efd640 00:18:27.880 [2024-12-10 21:45:28.657474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:27.880 [2024-12-10 21:45:28.657510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.671662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efcdd0 00:18:28.139 [2024-12-10 21:45:28.674220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.688363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efc560 00:18:28.139 [2024-12-10 21:45:28.690912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.690947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.705958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efbcf0 00:18:28.139 [2024-12-10 21:45:28.708575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.708623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.722944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efb480 00:18:28.139 [2024-12-10 21:45:28.725461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.725500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.739724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efac10 00:18:28.139 [2024-12-10 21:45:28.742216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.742251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.756624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016efa3a0 00:18:28.139 [2024-12-10 21:45:28.759261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.759299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.773856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef9b30 00:18:28.139 [2024-12-10 21:45:28.776377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.776418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.791273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef92c0 00:18:28.139 [2024-12-10 21:45:28.793755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.793794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.808606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef8a50 00:18:28.139 [2024-12-10 21:45:28.811063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.811103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.825935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef81e0 00:18:28.139 [2024-12-10 21:45:28.828385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.828427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.843289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef7970 00:18:28.139 [2024-12-10 21:45:28.845699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.860644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef7100 00:18:28.139 [2024-12-10 21:45:28.863034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.863077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.878101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef6890 00:18:28.139 [2024-12-10 21:45:28.880523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.880564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-12-10 21:45:28.895533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef6020 00:18:28.139 [2024-12-10 21:45:28.897876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.139 [2024-12-10 21:45:28.897922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:28.140 [2024-12-10 21:45:28.913006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef57b0 00:18:28.140 [2024-12-10 
21:45:28.915363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.140 [2024-12-10 21:45:28.915405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:28.398 [2024-12-10 21:45:28.930479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef4f40 00:18:28.398 [2024-12-10 21:45:28.932814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.398 [2024-12-10 21:45:28.932858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:28.398 [2024-12-10 21:45:28.947838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef46d0 00:18:28.398 [2024-12-10 21:45:28.950117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.398 [2024-12-10 21:45:28.950158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:28.398 [2024-12-10 21:45:28.965149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef3e60 00:18:28.398 [2024-12-10 21:45:28.967432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.398 [2024-12-10 21:45:28.967482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:28.398 [2024-12-10 21:45:28.982454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef35f0 00:18:28.398 [2024-12-10 21:45:28.984799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:28.984844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:28.999846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef2d80 00:18:28.399 [2024-12-10 21:45:29.002060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.002104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.017122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef2510 00:18:28.399 [2024-12-10 21:45:29.019338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.019390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.034322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef1ca0 
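Editor's note: the repeated data_crc32_calc_done digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions above are the intended outcome of the setup traced earlier: crc32c error injection on the accel layer while the controller is attached with data digest (--ddgst) enabled, and --bdev-retry-count -1 so failed writes are retried and counted rather than failed up the stack. A rough sketch of that RPC sequence, reconstructed from the traced commands (rpc_cmd in the trace presumably goes to the target's default RPC socket, while bperf_rpc goes to /var/tmp/bperf.sock):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$RPC" accel_error_inject_error -o crc32c -t disable        # injection off while attaching
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt crc32c results (flags as traced)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests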
00:18:28.399 [2024-12-10 21:45:29.036521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.036563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.051664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef1430 00:18:28.399 [2024-12-10 21:45:29.053896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.053950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.069370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef0bc0 00:18:28.399 [2024-12-10 21:45:29.071568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.071619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.086992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef0350 00:18:28.399 [2024-12-10 21:45:29.089174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.089222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.104687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eefae0 00:18:28.399 [2024-12-10 21:45:29.106816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.106863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.122035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eef270 00:18:28.399 [2024-12-10 21:45:29.124128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.124178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.139359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeea00 00:18:28.399 [2024-12-10 21:45:29.141454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.141502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.156639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with 
pdu=0x200016eee190 00:18:28.399 [2024-12-10 21:45:29.158666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.158708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.399 [2024-12-10 21:45:29.173810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eed920 00:18:28.399 [2024-12-10 21:45:29.175825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.399 [2024-12-10 21:45:29.175870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.191008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eed0b0 00:18:28.658 [2024-12-10 21:45:29.193027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.193072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.208264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eec840 00:18:28.658 [2024-12-10 21:45:29.210231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.210272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.225515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eebfd0 00:18:28.658 [2024-12-10 21:45:29.227461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.227508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.242377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeb760 00:18:28.658 [2024-12-10 21:45:29.244264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.244306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.259073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeaef0 00:18:28.658 [2024-12-10 21:45:29.260960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.261000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.275843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156a770) with pdu=0x200016eea680 00:18:28.658 [2024-12-10 21:45:29.277869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.278051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.293051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee9e10 00:18:28.658 [2024-12-10 21:45:29.295061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.295267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.310342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016 14676.00 IOPS, 57.33 MiB/s [2024-12-10T21:45:29.441Z] ee95a0 00:18:28.658 [2024-12-10 21:45:29.312341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.312540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.328071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee8d30 00:18:28.658 [2024-12-10 21:45:29.330048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.330258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.345342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee84c0 00:18:28.658 [2024-12-10 21:45:29.347334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.347535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.362636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee7c50 00:18:28.658 [2024-12-10 21:45:29.364564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.364750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.379776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee73e0 00:18:28.658 [2024-12-10 21:45:29.381641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.381828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:28.658 
[2024-12-10 21:45:29.397028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee6b70 00:18:28.658 [2024-12-10 21:45:29.399041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.399255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.414428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee6300 00:18:28.658 [2024-12-10 21:45:29.416300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.416480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:28.658 [2024-12-10 21:45:29.431632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee5a90 00:18:28.658 [2024-12-10 21:45:29.433319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.658 [2024-12-10 21:45:29.433366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.448771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee5220 00:18:28.917 [2024-12-10 21:45:29.450411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.450467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.465853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee49b0 00:18:28.917 [2024-12-10 21:45:29.467722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.467760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.482819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee4140 00:18:28.917 [2024-12-10 21:45:29.484402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.484582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.499620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee38d0 00:18:28.917 [2024-12-10 21:45:29.501178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.501217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.516252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee3060 00:18:28.917 [2024-12-10 21:45:29.517803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.517841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.532942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee27f0 00:18:28.917 [2024-12-10 21:45:29.534617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.534655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.549771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee1f80 00:18:28.917 [2024-12-10 21:45:29.551275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.551314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.566467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee1710 00:18:28.917 [2024-12-10 21:45:29.567960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.568000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.583151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee0ea0 00:18:28.917 [2024-12-10 21:45:29.584647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.584688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.599958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee0630 00:18:28.917 [2024-12-10 21:45:29.601568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.601603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.617170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edfdc0 00:18:28.917 [2024-12-10 21:45:29.618598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.618758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.634549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edf550 00:18:28.917 [2024-12-10 21:45:29.636136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.636323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.651853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edece0 00:18:28.917 [2024-12-10 21:45:29.653395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.653597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.669152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ede470 00:18:28.917 [2024-12-10 21:45:29.670745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.670937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:28.917 [2024-12-10 21:45:29.693412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eddc00 00:18:28.917 [2024-12-10 21:45:29.696265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:28.917 [2024-12-10 21:45:29.696466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.710540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ede470 00:18:29.185 [2024-12-10 21:45:29.713354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.713551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.727837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edece0 00:18:29.185 [2024-12-10 21:45:29.730627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.730810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.745171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edf550 00:18:29.185 [2024-12-10 21:45:29.747962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.748144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.762507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016edfdc0 00:18:29.185 [2024-12-10 21:45:29.765581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.765770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.781560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee0630 00:18:29.185 [2024-12-10 21:45:29.784396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.784584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.798920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee0ea0 00:18:29.185 [2024-12-10 21:45:29.801648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.801826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.816125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee1710 00:18:29.185 [2024-12-10 21:45:29.818831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.819009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.833290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee1f80 00:18:29.185 [2024-12-10 21:45:29.835967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.850167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee27f0 00:18:29.185 [2024-12-10 21:45:29.852679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.852844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.867192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee3060 00:18:29.185 [2024-12-10 21:45:29.869655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.869701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.884127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee38d0 00:18:29.185 [2024-12-10 21:45:29.886607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.886649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.901002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee4140 00:18:29.185 [2024-12-10 21:45:29.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.903648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.919962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee49b0 00:18:29.185 [2024-12-10 21:45:29.923282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.923337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.938629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee5220 00:18:29.185 [2024-12-10 21:45:29.941242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.941457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:29.185 [2024-12-10 21:45:29.956091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee5a90 00:18:29.185 [2024-12-10 21:45:29.958669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.185 [2024-12-10 21:45:29.958863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:29.973579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee6300 00:18:29.443 [2024-12-10 21:45:29.976116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:29.976316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:29.991087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee6b70 00:18:29.443 [2024-12-10 21:45:29.993583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 
21:45:29.993768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.008746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee73e0 00:18:29.443 [2024-12-10 21:45:30.011333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:30.011548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.026509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee7c50 00:18:29.443 [2024-12-10 21:45:30.029012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:30.029218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.044504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee84c0 00:18:29.443 [2024-12-10 21:45:30.047026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:30.047073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.061691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee8d30 00:18:29.443 [2024-12-10 21:45:30.063925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:30.064082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.078530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee95a0 00:18:29.443 [2024-12-10 21:45:30.080741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.443 [2024-12-10 21:45:30.080784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:29.443 [2024-12-10 21:45:30.095439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ee9e10 00:18:29.443 [2024-12-10 21:45:30.097633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.097680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.112314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eea680 00:18:29.444 [2024-12-10 21:45:30.114655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:29.444 [2024-12-10 21:45:30.114693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.129481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeaef0 00:18:29.444 [2024-12-10 21:45:30.131625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.131674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.146386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeb760 00:18:29.444 [2024-12-10 21:45:30.148521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.148565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.163162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eebfd0 00:18:29.444 [2024-12-10 21:45:30.165257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.165300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.179890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eec840 00:18:29.444 [2024-12-10 21:45:30.182123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.182161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.196923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eed0b0 00:18:29.444 [2024-12-10 21:45:30.199026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.199196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:29.444 [2024-12-10 21:45:30.213934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eed920 00:18:29.444 [2024-12-10 21:45:30.215972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.444 [2024-12-10 21:45:30.216015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:29.702 [2024-12-10 21:45:30.230735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eee190 00:18:29.702 [2024-12-10 21:45:30.232911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14150 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:29.702 [2024-12-10 21:45:30.232954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:29.702 [2024-12-10 21:45:30.247693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eeea00 00:18:29.702 [2024-12-10 21:45:30.249678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.702 [2024-12-10 21:45:30.249830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:29.702 [2024-12-10 21:45:30.264555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eef270 00:18:29.702 [2024-12-10 21:45:30.266528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.702 [2024-12-10 21:45:30.266572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:29.702 [2024-12-10 21:45:30.281271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016eefae0 00:18:29.702 [2024-12-10 21:45:30.283237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.702 [2024-12-10 21:45:30.283279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:29.702 [2024-12-10 21:45:30.298164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156a770) with pdu=0x200016ef0350 00:18:29.702 [2024-12-10 21:45:30.300397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.702 [2024-12-10 21:45:30.300461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:29.702 14675.00 IOPS, 57.32 MiB/s 00:18:29.702 Latency(us) 00:18:29.702 [2024-12-10T21:45:30.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.703 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.703 nvme0n1 : 2.00 14717.57 57.49 0.00 0.00 8688.10 8221.79 32887.16 00:18:29.703 [2024-12-10T21:45:30.486Z] =================================================================================================================== 00:18:29.703 [2024-12-10T21:45:30.486Z] Total : 14717.57 57.49 0.00 0.00 8688.10 8221.79 32887.16 00:18:29.703 { 00:18:29.703 "results": [ 00:18:29.703 { 00:18:29.703 "job": "nvme0n1", 00:18:29.703 "core_mask": "0x2", 00:18:29.703 "workload": "randwrite", 00:18:29.703 "status": "finished", 00:18:29.703 "queue_depth": 128, 00:18:29.703 "io_size": 4096, 00:18:29.703 "runtime": 2.002912, 00:18:29.703 "iops": 14717.571216309054, 00:18:29.703 "mibps": 57.49051256370724, 00:18:29.703 "io_failed": 0, 00:18:29.703 "io_timeout": 0, 00:18:29.703 "avg_latency_us": 8688.096796008116, 00:18:29.703 "min_latency_us": 8221.789090909091, 00:18:29.703 "max_latency_us": 32887.156363636364 00:18:29.703 } 00:18:29.703 ], 00:18:29.703 "core_count": 1 00:18:29.703 } 00:18:29.703 21:45:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:29.703 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:29.703 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:29.703 | .driver_specific 00:18:29.703 | .nvme_error 00:18:29.703 | .status_code 00:18:29.703 | .command_transient_transport_error' 00:18:29.703 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80533 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80533 ']' 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80533 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80533 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:29.961 killing process with pid 80533 00:18:29.961 Received shutdown signal, test time was about 2.000000 seconds 00:18:29.961 00:18:29.961 Latency(us) 00:18:29.961 [2024-12-10T21:45:30.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.961 [2024-12-10T21:45:30.744Z] =================================================================================================================== 00:18:29.961 [2024-12-10T21:45:30.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80533' 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80533 00:18:29.961 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80533 00:18:30.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
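The xtrace above shows how host/digest.sh verifies this error case: it queries bdevperf's private RPC socket for the per-bdev NVMe error counters and checks that the transient-transport-error count is non-zero (115 here) before killing the bdevperf process. A minimal standalone sketch of that check follows; the rpc.py path, socket name and jq filter are copied from the log, while wrapping them in a reusable function and script is an illustrative assumption, not the test's actual code layout.

    #!/usr/bin/env bash
    # Sketch: re-derive the transient transport error count the same way the
    # xtrace above does (rpc.py path, socket and jq filter taken from the log).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes NVMe status-code counters per bdev because the
        # controller was created with --nvme-error-stat (see bdev_nvme_set_options
        # in the setup xtrace).
        "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The case passes when at least one injected digest error surfaced as a
    # COMMAND TRANSIENT TRANSPORT ERROR completion; the log shows 115 > 0.
    (( $(get_transient_errcount nvme0n1) > 0 ))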
00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80587 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80587 /var/tmp/bperf.sock 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80587 ']' 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.220 21:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:30.220 [2024-12-10 21:45:30.914408] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:18:30.220 [2024-12-10 21:45:30.914716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80587 ] 00:18:30.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:30.220 Zero copy mechanism will not be used. 
00:18:30.478 [2024-12-10 21:45:31.053311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.478 [2024-12-10 21:45:31.087133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.478 [2024-12-10 21:45:31.119259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.478 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.478 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:30.478 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:30.478 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.044 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.302 nvme0n1 00:18:31.302 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:31.302 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.302 21:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:31.302 21:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.302 21:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:31.302 21:45:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:31.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:31.561 Zero copy mechanism will not be used. 00:18:31.561 Running I/O for 2 seconds... 
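Up to the "Running I/O for 2 seconds..." line above, the xtrace records the full setup for this second error case (randwrite, 128 KiB I/O, queue depth 16). Condensed into a hedged sketch below; the addresses, port, NQN, flags and the 32-operation injection count are copied from the log, while presenting the sequence as one standalone script (and backgrounding bdevperf with &) is an assumption.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # 1. bdevperf in wait mode (-z): randwrite, 128 KiB I/O, queue depth 16, 2 s run
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # 2. Keep per-bdev NVMe error counters and retry failed I/O indefinitely,
    #    so digest errors are counted rather than aborting the job
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # 3. Clear any previous crc32c error injection (the log issues this via
    #    rpc_cmd, i.e. not against the bperf socket)
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # 4. Attach the subsystem with data digest enabled (--ddgst), so each data
    #    PDU carries a CRC32C digest that gets verified
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 5. Corrupt the next 32 crc32c operations, then drive I/O; every corrupted
    #    digest shows up as a tcp.c "Data digest error" and the WRITE completes
    #    with COMMAND TRANSIENT TRANSPORT ERROR, as in the records that follow
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests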
00:18:31.561 [2024-12-10 21:45:32.141298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.141600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.141631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.147064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.147153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.147187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.152547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.152627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.152652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.157956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.158047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.158072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.163364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.163460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.163486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.168876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.169101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.169127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.174705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.174956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.175137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.180004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.180347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.180629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.185382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.185809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.185995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.190940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.191184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.191410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.196553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.196778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.196984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.201939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.561 [2024-12-10 21:45:32.202175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.561 [2024-12-10 21:45:32.202369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.561 [2024-12-10 21:45:32.207552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.207763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.207790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.213306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.213564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.218997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.219244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.219277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.224684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.224761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.230095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.230179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.230205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.235589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.235675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.235702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.241065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.241154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.241180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.246530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.246617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.246643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.252092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.252350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.252380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.258405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.258502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.258529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.264022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.264257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.264285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.269582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.269661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.269687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.275013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.275088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.275114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.280432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.280528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.280561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.285991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.286110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.291671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.291771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.291796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.297081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.297158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.297184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.302553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.302640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.302665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.308416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.308528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.308565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.313933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.314012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.314039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.319377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.319599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.319626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.325203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.325281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.325308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.330607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.330710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 
21:45:32.330736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.336068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.336154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.336179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.562 [2024-12-10 21:45:32.341515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.562 [2024-12-10 21:45:32.341612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.562 [2024-12-10 21:45:32.341637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.346953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.347028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.821 [2024-12-10 21:45:32.347054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.352330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.352426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.821 [2024-12-10 21:45:32.352474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.357719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.357800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.821 [2024-12-10 21:45:32.357826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.363198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.363297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.821 [2024-12-10 21:45:32.363322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.368939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.369149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:31.821 [2024-12-10 21:45:32.369175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.821 [2024-12-10 21:45:32.374580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.821 [2024-12-10 21:45:32.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.374707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.380034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.380258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.380283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.385880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.385958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.385984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.391372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.391499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.391530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.397019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.397104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.397129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.402433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.402557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.402588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.407927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.408152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.413765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.413855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.413882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.419258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.419345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.419373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.425064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.425187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.425219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.430567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.430695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.436067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.436351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.436381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.441862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.441987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.442018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.447355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.447456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.452872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.452970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.453002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.458259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.458357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.458383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.463757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.463835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.463860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.469152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.469240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.469265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.474540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.474616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.474641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.479984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.480061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.480086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.485425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.485524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.491168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.491383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.491408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.496814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.496899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.496929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.502468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.502546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.502570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.507886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.507959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.507984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.513265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.513339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.513364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.518709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.518805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.518837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.524160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.524257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.822 [2024-12-10 21:45:32.524282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.822 [2024-12-10 21:45:32.529579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.822 [2024-12-10 21:45:32.529676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.529701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.535186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.535295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.540747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.540839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.540863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.546218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.546427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.546469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.551815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.551894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.551918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.557237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.557333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.557358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.562942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 
21:45:32.563156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.563181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.568663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.568742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.568768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.574117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.574321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.574346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.579762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.579860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.579886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.585158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.585232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.585258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.590614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.590691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.590716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.596074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.596169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.596194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:31.823 [2024-12-10 21:45:32.601745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with 
pdu=0x200016eff3c8 00:18:31.823 [2024-12-10 21:45:32.601836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.823 [2024-12-10 21:45:32.601861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.607166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.607265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.607290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.612594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.612699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.612728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.618258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.618369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.618400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.623816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.623976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.629253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.629528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.629559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.635028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.635273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.635519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.640719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.640965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.641147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.646339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.646586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.646766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.652001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.652391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.657592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.657818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.657998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.663203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.663710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.668822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.669068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.669349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.674415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.674696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.674907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.680079] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.680304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.680458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.685773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.685873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.685899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.691185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.691261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.691286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.696577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.696652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.696677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.702100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.702184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.702209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.707547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.707643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.707668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.713059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.713263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.713288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.718887] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.719108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.719309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.724527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.724751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.724921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.730122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.730342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.735729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.735949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.741292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.741536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.083 [2024-12-10 21:45:32.741783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.083 [2024-12-10 21:45:32.746960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.083 [2024-12-10 21:45:32.747215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.747388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.752596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.752830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.753011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.084 
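
The entries above, and the ones that follow, all repeat one pattern: the `data_crc32_calc_done` callback in SPDK's NVMe/TCP transport code (tcp.c) reports a data digest mismatch on a PDU for qpair 0x156aab0, and the corresponding queued WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success, which the host driver prints via nvme_qpair.c. NVMe/TCP's optional data digest (DDGST) is a CRC32C over the PDU's data section, so a mismatch means the payload and its trailing digest no longer agree. Below is a minimal, self-contained sketch of that check, assuming a plain bitwise CRC32C for illustration only (real transports use table-driven or hardware-accelerated CRC32C); the payload and the "received" digest value are made up, not taken from this run.

/* Editorial sketch (not SPDK code): bitwise CRC32C (Castagnoli), the
 * checksum NVMe/TCP uses for the optional data digest (DDGST). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;              /* initial value */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            /* 0x82F63B78 is the reflected Castagnoli polynomial */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1)));
        }
    }
    return crc ^ 0xFFFFFFFFu;                /* final XOR */
}

int main(void)
{
    /* Hypothetical 32-byte payload, like the len:32 WRITEs in this log. */
    uint8_t data[32];
    memset(data, 0xA5, sizeof(data));

    uint32_t computed = crc32c(data, sizeof(data));
    uint32_t received = computed ^ 1u;       /* simulate a corrupted DDGST */

    if (computed != received) {
        /* This is the condition the log reports as "Data digest error". */
        printf("data digest mismatch: computed=0x%08x received=0x%08x\n",
               computed, received);
    }
    return 0;
}

A transport that detects this mismatch cannot trust the payload, but the command itself may well succeed if retried, which is why the completions below carry a transient transport error status rather than a media or namespace error.
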
[2024-12-10 21:45:32.758281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.758524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.758750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.763955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.764180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.764423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.769540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.769640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.769672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.774984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.775080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.775106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.780376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.780468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.780493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.785800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.785877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.785902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.791210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.791286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.791311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.796682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.796804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.802092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.802190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.802215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.807594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.807668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.807693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.812989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.813086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.813111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.818392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.818488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.818513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.823835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.823911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.823936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.829185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.829260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.829287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.834537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.834625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.834663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.839965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.840054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.840081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.845415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.845529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.850868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.851134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.851185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.856603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.856686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.856716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.084 [2024-12-10 21:45:32.862072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.084 [2024-12-10 21:45:32.862174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.084 [2024-12-10 21:45:32.862204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.867589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.867684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.867714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.873058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.873137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.873163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.878571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.878643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.878669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.883974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.884054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.884080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.889402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.889502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.894881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.894955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.894979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.900325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.900413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.900438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.905839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.905929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.905954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.911341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.911429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.911469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.917099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.917185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.917210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.923015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.923096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.923121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.928492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.928566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.928591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.934074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.934283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.934307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.939884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.939975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.940001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.945254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.945340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 
21:45:32.945366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.950937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.951026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.951051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.956433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.956546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.956577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.962036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.962255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.962281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.967795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.967882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.967908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.973261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.973346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.973371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.978638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.978751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.984067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.984147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:32.344 [2024-12-10 21:45:32.984172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.989484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.989567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.989594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:32.994890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:32.994970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:32.994996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.344 [2024-12-10 21:45:33.000401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.344 [2024-12-10 21:45:33.000516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.344 [2024-12-10 21:45:33.000550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.005863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.006082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.006109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.011796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.011896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.011929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.017308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.017538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.017564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.022965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.023055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.023081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.028428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.028529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.028568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.033854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.033937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.033963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.039395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.039512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.039544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.045022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.045235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.045261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.050769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.050998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.051256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.056364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.056621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.056794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.061914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.062141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.062335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.067536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.067757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.067991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.073169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.073593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.078771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.078998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.079185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.084468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.084689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.084875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.090235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.090604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.090842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.096047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.096364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.096602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.101823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.102046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.102275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.107314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.107620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.107793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.112929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.113156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.113376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.345 [2024-12-10 21:45:33.118723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.345 [2024-12-10 21:45:33.118973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.345 [2024-12-10 21:45:33.119159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.124670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.124894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.125180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.130381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.130625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.130881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.136191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.136430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.136715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.605 5561.00 IOPS, 695.12 MiB/s [2024-12-10T21:45:33.388Z] [2024-12-10 21:45:33.143047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 
00:18:32.605 [2024-12-10 21:45:33.143286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.143559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.148716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.148969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.149166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.154318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.154589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.154781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.159972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.160065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.160092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.165361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.165597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.165623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.170991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.171086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.171117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.176436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.176536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.176566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.181976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.182080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.182109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.187482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.187580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.187606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.192886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.193119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.193145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.198600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.198678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.198703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.605 [2024-12-10 21:45:33.204037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.605 [2024-12-10 21:45:33.204112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.605 [2024-12-10 21:45:33.204138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.209475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.209567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.209592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.214938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.215013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.215038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.220421] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.220527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.220553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.225890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.225979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.226004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.231276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.231365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.231391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.236697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.236778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.236804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.242152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.242240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.242265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.247593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.247665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.247691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.253005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.253101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.253126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.258393] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.258498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.258523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.263869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.263958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.269254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.269328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.269353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.274671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.274746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.274771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.280095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.280303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.280328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.285674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.285771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.285803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.291053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.291127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.291167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 
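
Each `spdk_nvme_print_completion` line in this run dumps the fields of the 16-byte NVMe completion entry: cdw0, sqhd (submission queue head pointer, in hex), p (phase tag), m (more), dnr (do-not-retry), and the status as (SCT/SC), here (00/22), which SPDK labels COMMAND TRANSIENT TRANSPORT ERROR. The sketch below shows how those fields are packed into dwords 2 and 3 of a completion entry per the NVMe base specification; the struct name and the sample values are illustrative, not SPDK's own `spdk_nvme_cpl` definition or data from this run.

/* Editorial sketch: unpacking the fields shown by the completion prints
 * from a raw NVMe completion queue entry. */
#include <stdint.h>
#include <stdio.h>

struct nvme_cqe {
    uint32_t dw0;
    uint32_t dw1;
    uint32_t dw2;   /* [15:0] sqhd, [31:16] sqid */
    uint32_t dw3;   /* [15:0] cid, [16] p, [24:17] sc, [27:25] sct,
                     * [29:28] crd, [30] m, [31] dnr */
};

int main(void)
{
    /* Hypothetical completion: sqid=1, cid=2, sct=0x0, sc=0x22 (transient
     * transport error), p=0, m=0, dnr=0. */
    struct nvme_cqe cqe = {
        .dw2 = (0x0001u << 16) | 0x0042u,    /* sqid=1, sqhd=0x0042 */
        .dw3 = (0x22u << 17) | 2u,           /* sc=0x22, cid=2 */
    };

    printf("sqid:%u cid:%u sqhd:%04x p:%u sct:%#x sc:%#x m:%u dnr:%u\n",
           (unsigned)(cqe.dw2 >> 16), (unsigned)(cqe.dw3 & 0xFFFF),
           (unsigned)(cqe.dw2 & 0xFFFF), (unsigned)((cqe.dw3 >> 16) & 1),
           (unsigned)((cqe.dw3 >> 25) & 0x7), (unsigned)((cqe.dw3 >> 17) & 0xFF),
           (unsigned)((cqe.dw3 >> 30) & 1), (unsigned)(cqe.dw3 >> 31));
    return 0;
}
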
[2024-12-10 21:45:33.296427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.296674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.296699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.301996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.302088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.302113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.307534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.307610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.307635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.312986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.313064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.313090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.318381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.318490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.318515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.323883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.323960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.323995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.329337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.329428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.329468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.334756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.334852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.606 [2024-12-10 21:45:33.334883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.606 [2024-12-10 21:45:33.340195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.606 [2024-12-10 21:45:33.340426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.345800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.345876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.345901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.351288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.351406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.357227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.357318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.357344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.362859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.362951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.362977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.368257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.368490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.368516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.373823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.373897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.373923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.379267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.379349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.379374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.607 [2024-12-10 21:45:33.384731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.607 [2024-12-10 21:45:33.384822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.607 [2024-12-10 21:45:33.384852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.390126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.390217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.390242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.395607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.395681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.395706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.401122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.401199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.401224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.406539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.406628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.406652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.411912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.412142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.412167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.417586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.417663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.417688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.423050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.423140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.866 [2024-12-10 21:45:33.423178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.866 [2024-12-10 21:45:33.428485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.866 [2024-12-10 21:45:33.428558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.428583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.433902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.433977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.439371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.439460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.439486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.444849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.444965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.444995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.450262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.450352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.450377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.455728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.455824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.455849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.461133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.461207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.461232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.466542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.466636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.466660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.472028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.472119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.472144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.477385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.477499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.477524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.482819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.482897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 
21:45:33.482922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.488271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.488360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.488385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.493730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.493847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.493877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.499254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.499606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.499637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.505115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.505232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.510613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.510689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.510715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.516020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.516094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.516119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.521423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.521516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:32.867 [2024-12-10 21:45:33.521541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.526819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.526914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.526939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.532270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.532359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.532384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.537682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.537753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.537778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.543098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.543338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.548827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.548920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.548946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.554719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.554798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.560173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.560258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.560284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.565673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.565764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.565789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.571119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.571226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.571251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.576543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.576617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.867 [2024-12-10 21:45:33.576641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.867 [2024-12-10 21:45:33.581968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.867 [2024-12-10 21:45:33.582057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.582082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.587381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.587471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.587496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.592816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.592890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.592914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.598199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.598274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.598299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.603589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.603666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.603691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.608967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.609042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.609067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.614307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.614404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.614429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.619747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.619842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.619866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.625050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.625136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.625161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.630409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.630518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.630549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.635887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.635976] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.636000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.868 [2024-12-10 21:45:33.641788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:32.868 [2024-12-10 21:45:33.641903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.868 [2024-12-10 21:45:33.641933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.647496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.647628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.647661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.653722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.653831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.653858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.659710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.659800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.659825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.665335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.665412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.665438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.670767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.670854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.670879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.676215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.676302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.676328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.681744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.681838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.681864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.687276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.687363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.687388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.692744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.692837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.692862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.698246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.698327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.698352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.127 [2024-12-10 21:45:33.703734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.127 [2024-12-10 21:45:33.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.127 [2024-12-10 21:45:33.703851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.709210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.709298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.709323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.714690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 
21:45:33.714782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.714807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.720148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.720244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.720268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.726081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.726296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.726322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.731746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.731842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.731867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.737226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.737317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.737342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.742663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.742775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.748089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.748183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.748208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.753549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with 
pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.753644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.753669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.758967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.759050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.759075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.764400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.764512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.764537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.769926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.770183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.770212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.775641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.775734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.781016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.781103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.781127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.786476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.786567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.786591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.791900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.791988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.792012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.797428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.797543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.797576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.803031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.803119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.803156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.808437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.808540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.808565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.813879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.814097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.814122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.819643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.819881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.820102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.825224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.825485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.825702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.830842] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.831080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.831287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.836485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.836745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.836977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.842265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.842560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.842798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.848049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.848304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.848565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.853672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.853905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.854099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.128 [2024-12-10 21:45:33.859288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.128 [2024-12-10 21:45:33.859536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.128 [2024-12-10 21:45:33.859802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.864903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.865157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.865402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.870484] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.870739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.871001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.876135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.876368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.876605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.881691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.881921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.882096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.887601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.887887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.888128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.893263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.893505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.893688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.898807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.898908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.898935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.129 [2024-12-10 21:45:33.904248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.129 [2024-12-10 21:45:33.904482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.129 [2024-12-10 21:45:33.904511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.388 
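The repeated data_crc32_calc_done errors above are the NVMe/TCP data-digest check failing: the receiver recomputes a CRC32C over the PDU payload and compares it against the digest carried in the PDU, and a mismatch is surfaced to the host as the TRANSIENT TRANSPORT ERROR (00/22) completion printed for each WRITE. A minimal sketch of that kind of check follows, using a hypothetical 32-byte payload and a hypothetical received digest value for illustration only; it is not SPDK's implementation.

    /* Illustrative sketch of a CRC32C data-digest comparison of the kind
     * reported by the "Data digest error" lines above. Not SPDK code. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t payload[32] = { 0 };          /* hypothetical PDU data (len:32 as in the log) */
        uint32_t received_ddgst = 0xDEADBEEF; /* hypothetical digest carried in the PDU */

        if (crc32c(payload, sizeof(payload)) != received_ddgst) {
            /* The log above reports this condition as a data digest error and
             * completes the WRITE with TRANSIENT TRANSPORT ERROR (00/22). */
            printf("data digest mismatch\n");
        }
        return 0;
    }

In the run logged here every comparison fails, so each WRITE completes with the transient transport error shown rather than a successful status.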
[2024-12-10 21:45:33.909883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.909982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.910007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.915341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.915439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.915489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.920779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.920863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.920889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.926208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.926306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.931659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.931758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.937069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.937170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.937206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.942513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.942622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.942647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.947931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.948168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.948194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.953654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.953733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.953758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.959084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.959195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.959220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.964581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.964655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.969979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.970055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.970080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.975414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.975552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.975586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.980995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.981074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.981100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.986768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.986862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.986887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.992412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.992649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.388 [2024-12-10 21:45:33.992674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.388 [2024-12-10 21:45:33.998211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.388 [2024-12-10 21:45:33.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:33.998317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.003710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.003811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.003848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.009214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.009293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.009317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.014724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.014802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.014827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.020200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.020426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.020470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.025945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.026025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.026049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.031400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.031548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.031579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.036860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.036945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.036970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.042319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.042422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.042463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.048080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.048286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.048311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.053757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.053846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.053871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.059203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.059335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.064707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.064815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.064841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.070125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.070224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.070249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.075695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.075776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.075801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.081164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.081270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.081295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.086594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.086697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.086722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.092060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.092269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.092295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.097814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.098058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.098234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.103524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.103761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.104012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.109122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.109323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.109349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.114736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.114820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.114845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.120198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.120401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.120426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.125830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.125909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.131266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.131365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 21:45:34.131391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:33.389 [2024-12-10 21:45:34.136775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x156aab0) with pdu=0x200016eff3c8 00:18:33.389 [2024-12-10 21:45:34.136852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.389 [2024-12-10 
21:45:34.136878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:33.389 5585.50 IOPS, 698.19 MiB/s 00:18:33.389 Latency(us) 00:18:33.389 [2024-12-10T21:45:34.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.389 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:33.389 nvme0n1 : 2.00 5583.61 697.95 0.00 0.00 2858.58 1921.40 7328.12 00:18:33.389 [2024-12-10T21:45:34.172Z] =================================================================================================================== 00:18:33.389 [2024-12-10T21:45:34.172Z] Total : 5583.61 697.95 0.00 0.00 2858.58 1921.40 7328.12 00:18:33.389 { 00:18:33.389 "results": [ 00:18:33.390 { 00:18:33.390 "job": "nvme0n1", 00:18:33.390 "core_mask": "0x2", 00:18:33.390 "workload": "randwrite", 00:18:33.390 "status": "finished", 00:18:33.390 "queue_depth": 16, 00:18:33.390 "io_size": 131072, 00:18:33.390 "runtime": 2.00426, 00:18:33.390 "iops": 5583.606917266223, 00:18:33.390 "mibps": 697.9508646582779, 00:18:33.390 "io_failed": 0, 00:18:33.390 "io_timeout": 0, 00:18:33.390 "avg_latency_us": 2858.5821658638033, 00:18:33.390 "min_latency_us": 1921.3963636363637, 00:18:33.390 "max_latency_us": 7328.1163636363635 00:18:33.390 } 00:18:33.390 ], 00:18:33.390 "core_count": 1 00:18:33.390 } 00:18:33.390 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:33.647 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:33.647 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:33.647 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:33.647 | .driver_specific 00:18:33.647 | .nvme_error 00:18:33.647 | .status_code 00:18:33.647 | .command_transient_transport_error' 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 )) 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80587 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80587 ']' 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80587 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80587 00:18:33.906 killing process with pid 80587 00:18:33.906 Received shutdown signal, test time was about 2.000000 seconds 00:18:33.906 00:18:33.906 Latency(us) 00:18:33.906 [2024-12-10T21:45:34.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.906 [2024-12-10T21:45:34.689Z] =================================================================================================================== 00:18:33.906 [2024-12-10T21:45:34.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.906 21:45:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80587' 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80587 00:18:33.906 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80587 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80395 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80395 ']' 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80395 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80395 00:18:34.164 killing process with pid 80395 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80395' 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80395 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80395 00:18:34.164 ************************************ 00:18:34.164 END TEST nvmf_digest_error 00:18:34.164 ************************************ 00:18:34.164 00:18:34.164 real 0m16.423s 00:18:34.164 user 0m33.269s 00:18:34.164 sys 0m4.391s 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:34.164 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:34.423 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.423 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:34.423 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.423 21:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.423 rmmod nvme_tcp 00:18:34.423 rmmod nvme_fabrics 00:18:34.423 rmmod nvme_keyring 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.423 21:45:35 
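Note: the transient-error count asserted above ((( 361 > 0 ))) is read back from bdevperf over its RPC socket. A minimal sketch of the same query, assuming the /var/tmp/bperf.sock socket and the nvme0n1 bdev name used in this run:

    # Dump per-bdev I/O stats from bdevperf and extract the NVMe
    # "command transient transport error" counter that the data-digest
    # errors above increment.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'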
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80395 ']' 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80395 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80395 ']' 00:18:34.423 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80395 00:18:34.423 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80395) - No such process 00:18:34.423 Process with pid 80395 is not found 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80395 is not found' 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.424 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:34.684 00:18:34.684 real 0m32.988s 00:18:34.684 user 1m4.578s 00:18:34.684 sys 0m9.145s 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.684 ************************************ 00:18:34.684 END TEST nvmf_digest 00:18:34.684 ************************************ 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.684 ************************************ 00:18:34.684 START TEST nvmf_host_multipath 00:18:34.684 ************************************ 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:34.684 * Looking for test storage... 00:18:34.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.684 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@345 -- # : 1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.942 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.942 --rc genhtml_branch_coverage=1 00:18:34.942 --rc genhtml_function_coverage=1 00:18:34.943 --rc genhtml_legend=1 00:18:34.943 --rc geninfo_all_blocks=1 00:18:34.943 --rc geninfo_unexecuted_blocks=1 00:18:34.943 00:18:34.943 ' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.943 --rc genhtml_branch_coverage=1 00:18:34.943 --rc genhtml_function_coverage=1 00:18:34.943 --rc genhtml_legend=1 00:18:34.943 --rc geninfo_all_blocks=1 00:18:34.943 --rc geninfo_unexecuted_blocks=1 00:18:34.943 00:18:34.943 ' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.943 --rc genhtml_branch_coverage=1 00:18:34.943 --rc genhtml_function_coverage=1 00:18:34.943 --rc genhtml_legend=1 00:18:34.943 --rc geninfo_all_blocks=1 00:18:34.943 --rc geninfo_unexecuted_blocks=1 00:18:34.943 00:18:34.943 ' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.943 --rc genhtml_branch_coverage=1 00:18:34.943 --rc genhtml_function_coverage=1 00:18:34.943 --rc genhtml_legend=1 00:18:34.943 --rc geninfo_all_blocks=1 00:18:34.943 --rc geninfo_unexecuted_blocks=1 00:18:34.943 00:18:34.943 ' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.943 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:34.943 Cannot find device "nvmf_init_br" 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:34.943 Cannot find device "nvmf_init_br2" 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:34.943 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:34.944 Cannot find device "nvmf_tgt_br" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.944 Cannot find device "nvmf_tgt_br2" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:34.944 Cannot find device "nvmf_init_br" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:34.944 Cannot find device "nvmf_init_br2" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:34.944 Cannot find device "nvmf_tgt_br" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:34.944 Cannot find device "nvmf_tgt_br2" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:34.944 Cannot find device "nvmf_br" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:34.944 Cannot find device "nvmf_init_if" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:34.944 Cannot find device "nvmf_init_if2" 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:34.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.944 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:35.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:18:35.202 00:18:35.202 --- 10.0.0.3 ping statistics --- 00:18:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.202 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:35.202 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:35.202 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:35.202 00:18:35.202 --- 10.0.0.4 ping statistics --- 00:18:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.202 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:35.202 00:18:35.202 --- 10.0.0.1 ping statistics --- 00:18:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.202 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:35.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:35.202 00:18:35.202 --- 10.0.0.2 ping statistics --- 00:18:35.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.202 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80903 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80903 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80903 ']' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.202 21:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.202 [2024-12-10 21:45:35.973068] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
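Note: the "Cannot find device" probes and ip commands above build the NET_TYPE=virt test fabric from scratch: veth pairs joined by a bridge, with the target side moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of one initiator/target pair (the run above also creates a second pair for 10.0.0.2/10.0.0.4):

    # Target network namespace plus two veth pairs; the *_br ends are
    # enslaved to a common bridge so 10.0.0.1 (initiator) can reach
    # 10.0.0.3 (target listener).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in and verify the path end to end.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3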
00:18:35.202 [2024-12-10 21:45:35.973150] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.461 [2024-12-10 21:45:36.123418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.461 [2024-12-10 21:45:36.181247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.461 [2024-12-10 21:45:36.181325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.461 [2024-12-10 21:45:36.181348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.461 [2024-12-10 21:45:36.181365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.461 [2024-12-10 21:45:36.181380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.461 [2024-12-10 21:45:36.182471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.461 [2024-12-10 21:45:36.182492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.461 [2024-12-10 21:45:36.223553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80903 00:18:35.719 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:35.977 [2024-12-10 21:45:36.662015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.977 21:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:36.235 Malloc0 00:18:36.492 21:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:36.750 21:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.007 21:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:37.265 [2024-12-10 21:45:37.984723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.266 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:37.831 [2024-12-10 21:45:38.312921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80952 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80952 /var/tmp/bdevperf.sock 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80952 ']' 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.831 21:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:38.763 21:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.763 21:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:38.763 21:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:39.020 21:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:39.278 Nvme0n1 00:18:39.278 21:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:39.843 Nvme0n1 00:18:39.843 21:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:39.843 21:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:40.778 21:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:40.778 21:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:41.037 21:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:41.295 21:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:41.295 21:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81003 00:18:41.295 21:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:41.295 21:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.853 Attaching 4 probes... 00:18:47.853 @path[10.0.0.3, 4421]: 17208 00:18:47.853 @path[10.0.0.3, 4421]: 17272 00:18:47.853 @path[10.0.0.3, 4421]: 17184 00:18:47.853 @path[10.0.0.3, 4421]: 16520 00:18:47.853 @path[10.0.0.3, 4421]: 17133 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81003 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.853 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:47.854 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:47.854 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:48.419 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:48.419 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81115 00:18:48.419 21:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:48.419 21:45:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:54.977 21:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:54.977 21:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.977 Attaching 4 probes... 00:18:54.977 @path[10.0.0.3, 4420]: 17198 00:18:54.977 @path[10.0.0.3, 4420]: 17585 00:18:54.977 @path[10.0.0.3, 4420]: 17377 00:18:54.977 @path[10.0.0.3, 4420]: 17484 00:18:54.977 @path[10.0.0.3, 4420]: 17404 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81115 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:54.977 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.235 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:55.235 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81229 00:18:55.235 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.235 21:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:01.793 21:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:01.793 21:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.793 Attaching 4 probes... 00:19:01.793 @path[10.0.0.3, 4421]: 13562 00:19:01.793 @path[10.0.0.3, 4421]: 17128 00:19:01.793 @path[10.0.0.3, 4421]: 17262 00:19:01.793 @path[10.0.0.3, 4421]: 16727 00:19:01.793 @path[10.0.0.3, 4421]: 16978 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81229 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:01.793 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.051 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:02.051 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81347 00:19:02.051 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:02.051 21:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:08.609 21:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:08.609 21:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.609 Attaching 4 probes... 
00:19:08.609 00:19:08.609 00:19:08.609 00:19:08.609 00:19:08.609 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81347 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:08.609 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:09.175 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:09.175 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81460 00:19:09.175 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:09.175 21:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.739 Attaching 4 probes... 
00:19:15.739 @path[10.0.0.3, 4421]: 16995 00:19:15.739 @path[10.0.0.3, 4421]: 17264 00:19:15.739 @path[10.0.0.3, 4421]: 16784 00:19:15.739 @path[10.0.0.3, 4421]: 17240 00:19:15.739 @path[10.0.0.3, 4421]: 17241 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:15.739 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81460 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.740 21:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:15.740 21:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:16.672 21:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:16.672 21:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81582 00:19:16.672 21:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:16.672 21:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.235 Attaching 4 probes... 
00:19:23.235 @path[10.0.0.3, 4420]: 16684 00:19:23.235 @path[10.0.0.3, 4420]: 17048 00:19:23.235 @path[10.0.0.3, 4420]: 17039 00:19:23.235 @path[10.0.0.3, 4420]: 16448 00:19:23.235 @path[10.0.0.3, 4420]: 16925 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81582 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:23.235 [2024-12-10 21:46:23.833036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:23.235 21:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:23.499 21:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:30.060 21:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:30.060 21:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81758 00:19:30.060 21:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:30.060 21:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.631 Attaching 4 probes... 
00:19:36.631 @path[10.0.0.3, 4421]: 16361 00:19:36.631 @path[10.0.0.3, 4421]: 16617 00:19:36.631 @path[10.0.0.3, 4421]: 16825 00:19:36.631 @path[10.0.0.3, 4421]: 16820 00:19:36.631 @path[10.0.0.3, 4421]: 16123 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81758 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80952 ']' 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:36.631 killing process with pid 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80952' 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80952 00:19:36.631 { 00:19:36.631 "results": [ 00:19:36.631 { 00:19:36.631 "job": "Nvme0n1", 00:19:36.631 "core_mask": "0x4", 00:19:36.631 "workload": "verify", 00:19:36.631 "status": "terminated", 00:19:36.631 "verify_range": { 00:19:36.631 "start": 0, 00:19:36.631 "length": 16384 00:19:36.631 }, 00:19:36.631 "queue_depth": 128, 00:19:36.631 "io_size": 4096, 00:19:36.631 "runtime": 55.918785, 00:19:36.631 "iops": 7290.322921000519, 00:19:36.631 "mibps": 28.47782391015828, 00:19:36.631 "io_failed": 0, 00:19:36.631 "io_timeout": 0, 00:19:36.631 "avg_latency_us": 17525.482743154713, 00:19:36.631 "min_latency_us": 1243.6945454545455, 00:19:36.631 "max_latency_us": 7046430.72 00:19:36.631 } 00:19:36.631 ], 00:19:36.631 "core_count": 1 00:19:36.631 } 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80952 00:19:36.631 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.631 [2024-12-10 21:45:38.399657] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 
24.03.0 initialization... 00:19:36.631 [2024-12-10 21:45:38.399800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80952 ] 00:19:36.631 [2024-12-10 21:45:38.548850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.631 [2024-12-10 21:45:38.582129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.631 [2024-12-10 21:45:38.612016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.631 Running I/O for 90 seconds... 00:19:36.631 8618.00 IOPS, 33.66 MiB/s [2024-12-10T21:46:37.414Z] 8663.00 IOPS, 33.84 MiB/s [2024-12-10T21:46:37.414Z] 8716.67 IOPS, 34.05 MiB/s [2024-12-10T21:46:37.414Z] 8694.75 IOPS, 33.96 MiB/s [2024-12-10T21:46:37.414Z] 8682.20 IOPS, 33.91 MiB/s [2024-12-10T21:46:37.414Z] 8625.00 IOPS, 33.69 MiB/s [2024-12-10T21:46:37.414Z] 8600.86 IOPS, 33.60 MiB/s [2024-12-10T21:46:37.414Z] 8571.38 IOPS, 33.48 MiB/s [2024-12-10T21:46:37.414Z] [2024-12-10 21:45:48.892581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.892970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.892997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.631 [2024-12-10 21:45:48.893016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.893044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.631 [2024-12-10 21:45:48.893064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.893092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.631 [2024-12-10 21:45:48.893150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.893182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.631 [2024-12-10 21:45:48.893204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.893232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.631 [2024-12-10 21:45:48.893252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.631 [2024-12-10 21:45:48.893280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.631 [2024-12-10 21:45:48.893300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:36.632 [2024-12-10 21:45:48.893513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.893852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.893873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.894954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.894974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.895021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.895069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.895118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.895180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.632 [2024-12-10 21:45:48.895231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:19:36.632 [2024-12-10 21:45:48.895332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.632 [2024-12-10 21:45:48.895642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.632 [2024-12-10 21:45:48.895662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.895689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.895709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.895736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.895756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.895785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.895804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.896887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.896924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.896960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.896983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.897953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.897981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.633 [2024-12-10 21:45:48.898322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.633 [2024-12-10 21:45:48.898370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.898418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.898485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.898534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.898924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.898957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.898979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.633 [2024-12-10 21:45:48.899544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.633 [2024-12-10 21:45:48.899563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.899610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.899658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.899717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.899765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.899812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.899869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.899917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.899966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.899993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.900013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.900061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.634 [2024-12-10 21:45:48.900110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:19:36.634 [2024-12-10 21:45:48.900213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.900960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.900996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.634 [2024-12-10 21:45:48.901572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.634 [2024-12-10 21:45:48.901610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.901631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.901679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.635 [2024-12-10 21:45:48.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.901774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.901875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.901923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.901971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.901998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.635 [2024-12-10 21:45:48.902573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.902958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.902980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:19:36.635 [2024-12-10 21:45:48.903264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.635 [2024-12-10 21:45:48.903284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.635 [2024-12-10 21:45:48.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.903697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.903965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.903985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.904032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.904080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.636 [2024-12-10 21:45:48.904819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.636 [2024-12-10 21:45:48.904962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.904990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.636 [2024-12-10 21:45:48.905329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.636 [2024-12-10 21:45:48.905349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.905376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.905414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.905460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.905483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.907787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.907826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.907863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.907885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.907915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.907936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.907963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.907983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:19:36.637 [2024-12-10 21:45:48.908638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.908658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.908945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.908983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.909004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.637 [2024-12-10 21:45:48.909053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.637 [2024-12-10 21:45:48.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.637 [2024-12-10 21:45:48.909765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.909793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.909813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.909844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.909865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.909892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.909912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.909940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.909961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.909988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.638 [2024-12-10 21:45:48.910151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.910731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.910826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.910882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.910932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.910960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.910980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.638 [2024-12-10 21:45:48.911573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.911621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.911669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:36.638 [2024-12-10 21:45:48.911696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.911716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.911764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.638 [2024-12-10 21:45:48.911791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.638 [2024-12-10 21:45:48.911811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.911839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.911859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.911886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.911906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.911936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.911956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.911984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.912636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.912964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.912984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.639 [2024-12-10 21:45:48.913032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.639 [2024-12-10 21:45:48.913207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.639 [2024-12-10 21:45:48.913648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.639 [2024-12-10 21:45:48.913668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.913695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.913715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.913753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.913773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.913801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.913821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.913848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.913869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.913897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.913917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:48.922658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.922686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.922706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:48.924575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:48.924617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.640 8541.22 IOPS, 33.36 MiB/s [2024-12-10T21:46:37.423Z] 8559.90 IOPS, 33.44 MiB/s [2024-12-10T21:46:37.423Z] 8579.55 IOPS, 33.51 MiB/s [2024-12-10T21:46:37.423Z] 8591.92 IOPS, 33.56 MiB/s [2024-12-10T21:46:37.423Z] 8604.54 IOPS, 33.61 MiB/s [2024-12-10T21:46:37.423Z] 8611.86 IOPS, 33.64 MiB/s [2024-12-10T21:46:37.423Z] [2024-12-10 21:45:55.571758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.571835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.571897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.571919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.571944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.571961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.571999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.572076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 
21:45:55.572102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.572118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.572157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.640 [2024-12-10 21:45:55.572195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.640 [2024-12-10 21:45:55.572830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.640 [2024-12-10 21:45:55.572846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.572874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.572892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.572914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.572931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.572953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.572968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.572991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:36.641 [2024-12-10 21:45:55.573295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.641 [2024-12-10 21:45:55.573824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.573968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.573984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.641 [2024-12-10 21:45:55.574462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.641 [2024-12-10 21:45:55.574488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.574505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:001f p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.574973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.574990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 
[2024-12-10 21:45:55.575342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.642 [2024-12-10 21:45:55.575847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.575967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.575990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.576006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.576029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.576045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.576068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.576084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.642 [2024-12-10 21:45:55.576117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.642 [2024-12-10 21:45:55.576135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.576935] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:45:55.576967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:0050 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577934] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.577964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.577980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.578010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.578026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.578056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.578073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.578102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.578128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:45:55.578159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:45:55.578176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.643 8603.07 IOPS, 33.61 MiB/s [2024-12-10T21:46:37.426Z] 8071.56 IOPS, 31.53 MiB/s [2024-12-10T21:46:37.426Z] 8101.71 IOPS, 31.65 MiB/s [2024-12-10T21:46:37.426Z] 8129.39 IOPS, 31.76 MiB/s [2024-12-10T21:46:37.426Z] 8155.42 IOPS, 31.86 MiB/s [2024-12-10T21:46:37.426Z] 8158.05 IOPS, 31.87 MiB/s [2024-12-10T21:46:37.426Z] 8182.14 IOPS, 31.96 MiB/s [2024-12-10T21:46:37.426Z] 8206.95 IOPS, 32.06 MiB/s [2024-12-10T21:46:37.426Z] [2024-12-10 21:46:02.705516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10592 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.643 [2024-12-10 21:46:02.705926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:46:02.705967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.705990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:46:02.706006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.706060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:46:02.706077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.706100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:46:02.706116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.643 [2024-12-10 21:46:02.706155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.643 [2024-12-10 21:46:02.706178] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 
21:46:02.706599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.644 [2024-12-10 21:46:02.706616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.706982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.706998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 
cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.644 [2024-12-10 21:46:02.707607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.644 [2024-12-10 21:46:02.707632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.707648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707844] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.707968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.707984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.645 [2024-12-10 21:46:02.708317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.708972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.708988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.645 [2024-12-10 21:46:02.709271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.645 [2024-12-10 21:46:02.709294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 
p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.709649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.709968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.709985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.710276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.710998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.646 [2024-12-10 21:46:02.711028] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.646 [2024-12-10 21:46:02.711816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.646 [2024-12-10 21:46:02.711850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:02.711875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.647 7890.48 IOPS, 30.82 MiB/s [2024-12-10T21:46:37.430Z] 7561.71 IOPS, 29.54 MiB/s [2024-12-10T21:46:37.430Z] 7259.24 IOPS, 28.36 MiB/s [2024-12-10T21:46:37.430Z] 6980.04 IOPS, 27.27 MiB/s [2024-12-10T21:46:37.430Z] 6721.52 IOPS, 26.26 MiB/s [2024-12-10T21:46:37.430Z] 6481.46 IOPS, 25.32 MiB/s [2024-12-10T21:46:37.430Z] 6257.97 IOPS, 24.45 MiB/s [2024-12-10T21:46:37.430Z] 6301.27 IOPS, 24.61 MiB/s [2024-12-10T21:46:37.430Z] 6376.97 IOPS, 24.91 MiB/s [2024-12-10T21:46:37.430Z] 6446.44 IOPS, 25.18 MiB/s [2024-12-10T21:46:37.430Z] 6506.36 IOPS, 25.42 MiB/s [2024-12-10T21:46:37.430Z] 6568.41 IOPS, 25.66 MiB/s [2024-12-10T21:46:37.430Z] 6626.91 IOPS, 25.89 MiB/s [2024-12-10T21:46:37.430Z] [2024-12-10 21:46:16.227197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.227634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.227975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.227991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d 
p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.647 [2024-12-10 21:46:16.228602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.228676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.228711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.228742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.228774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.647 [2024-12-10 21:46:16.228791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.647 [2024-12-10 21:46:16.228806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.228822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.228854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.228869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.228885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.228900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.228926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.228943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 
21:46:16.228960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.228974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.228991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64744 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:36.648 [2024-12-10 21:46:16.229962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.229978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.648 [2024-12-10 21:46:16.230009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.648 [2024-12-10 21:46:16.230024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 
21:46:16.230282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.649 [2024-12-10 21:46:16.230571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230610] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.230754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.230974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.230997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64360 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.649 [2024-12-10 21:46:16.231280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.231311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.649 [2024-12-10 21:46:16.231342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.649 [2024-12-10 21:46:16.231358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.650 [2024-12-10 21:46:16.231553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.650 [2024-12-10 21:46:16.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.231775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.231791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191d290 is same with the state(6) to be set 00:19:36.650 [2024-12-10 21:46:16.231809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:36.650 [2024-12-10 21:46:16.231820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:36.650 [2024-12-10 21:46:16.231838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64424 len:8 PRP1 0x0 PRP2 0x0 00:19:36.650 [2024-12-10 21:46:16.231854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.233052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:36.650 [2024-12-10 21:46:16.233136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.650 [2024-12-10 21:46:16.233162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.650 [2024-12-10 21:46:16.233203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188de90 (9): Bad file descriptor 00:19:36.650 [2024-12-10 21:46:16.233630] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.650 [2024-12-10 21:46:16.233664] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188de90 with addr=10.0.0.3, port=4421 00:19:36.650 [2024-12-10 21:46:16.233682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188de90 is same with the state(6) to be set 00:19:36.650 [2024-12-10 21:46:16.233718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188de90 (9): Bad file descriptor 00:19:36.650 [2024-12-10 21:46:16.233751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:36.650 [2024-12-10 21:46:16.233769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:36.650 [2024-12-10 21:46:16.233785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:36.650 [2024-12-10 21:46:16.233807] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:36.650 [2024-12-10 21:46:16.233826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:36.650 6680.86 IOPS, 26.10 MiB/s [2024-12-10T21:46:37.433Z] 6727.76 IOPS, 26.28 MiB/s [2024-12-10T21:46:37.433Z] 6772.61 IOPS, 26.46 MiB/s [2024-12-10T21:46:37.433Z] 6817.62 IOPS, 26.63 MiB/s [2024-12-10T21:46:37.433Z] 6860.38 IOPS, 26.80 MiB/s [2024-12-10T21:46:37.433Z] 6899.68 IOPS, 26.95 MiB/s [2024-12-10T21:46:37.433Z] 6930.26 IOPS, 27.07 MiB/s [2024-12-10T21:46:37.433Z] 6968.72 IOPS, 27.22 MiB/s [2024-12-10T21:46:37.433Z] 7004.86 IOPS, 27.36 MiB/s [2024-12-10T21:46:37.433Z] 7040.29 IOPS, 27.50 MiB/s [2024-12-10T21:46:37.433Z] [2024-12-10 21:46:26.308255] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
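Editor's note: the burst of *NOTICE* completions above is consistent with the multipath failover this test drives. spdk_nvme_print_completion prints the status as an (SCT/SC) pair: (03/02) is Status Code Type 0x3 (Path Related Status) with Status Code 0x02, Asymmetric Access Inaccessible, and (00/08) is Generic Command Status with Status Code 0x08, Command Aborted due to SQ Deletion, emitted while the submission queue of the failing path is deleted. The "connect() failed, errno = 111" just before the retry is ECONNREFUSED on Linux; the later reset against port 4421 succeeds. A minimal, hypothetical decoding helper for the two pairs seen in this log (bash sketch, not part of the SPDK tree):

```bash
#!/usr/bin/env bash
# Hypothetical helper: map the two "(SCT/SC)" status pairs that dominate the
# log above to their NVMe status names. Only these two codes are covered.
decode_nvme_status() {
    case "$1" in
        03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;  # SCT 0x3 path-related, SC 0x02 (ANA state Inaccessible)
        00/08) echo "ABORTED - SQ DELETION"          ;;  # SCT 0x0 generic, SC 0x08 (SQ deleted during reset)
        *)     echo "unmapped status: $1"            ;;
    esac
}

decode_nvme_status 03/02   # -> ASYMMETRIC ACCESS INACCESSIBLE
decode_nvme_status 00/08   # -> ABORTED - SQ DELETION
```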
00:19:36.650 7074.13 IOPS, 27.63 MiB/s [2024-12-10T21:46:37.433Z] 7106.47 IOPS, 27.76 MiB/s [2024-12-10T21:46:37.433Z] 7132.06 IOPS, 27.86 MiB/s [2024-12-10T21:46:37.433Z] 7150.16 IOPS, 27.93 MiB/s [2024-12-10T21:46:37.433Z] 7172.52 IOPS, 28.02 MiB/s [2024-12-10T21:46:37.433Z] 7193.76 IOPS, 28.10 MiB/s [2024-12-10T21:46:37.433Z] 7215.19 IOPS, 28.18 MiB/s [2024-12-10T21:46:37.433Z] 7237.77 IOPS, 28.27 MiB/s [2024-12-10T21:46:37.433Z] 7259.89 IOPS, 28.36 MiB/s [2024-12-10T21:46:37.433Z] 7274.51 IOPS, 28.42 MiB/s [2024-12-10T21:46:37.433Z] Received shutdown signal, test time was about 55.919624 seconds 00:19:36.650 00:19:36.650 Latency(us) 00:19:36.650 [2024-12-10T21:46:37.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.650 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.650 Verification LBA range: start 0x0 length 0x4000 00:19:36.650 Nvme0n1 : 55.92 7290.32 28.48 0.00 0.00 17525.48 1243.69 7046430.72 00:19:36.650 [2024-12-10T21:46:37.433Z] =================================================================================================================== 00:19:36.650 [2024-12-10T21:46:37.433Z] Total : 7290.32 28.48 0.00 0.00 17525.48 1243.69 7046430.72 00:19:36.650 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:36.650 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.650 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:36.650 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:36.650 21:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.650 rmmod nvme_tcp 00:19:36.650 rmmod nvme_fabrics 00:19:36.650 rmmod nvme_keyring 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80903 ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80903 ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.650 
killing process with pid 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80903' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80903 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:36.650 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:36.651 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:36.910 00:19:36.910 real 1m2.220s 00:19:36.910 user 2m53.495s 00:19:36.910 sys 0m18.250s 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:36.910 ************************************ 00:19:36.910 END TEST nvmf_host_multipath 00:19:36.910 ************************************ 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.910 ************************************ 00:19:36.910 START TEST nvmf_timeout 00:19:36.910 ************************************ 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:36.910 * Looking for test storage... 00:19:36.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:36.910 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.169 --rc genhtml_branch_coverage=1 00:19:37.169 --rc genhtml_function_coverage=1 00:19:37.169 --rc genhtml_legend=1 00:19:37.169 --rc geninfo_all_blocks=1 00:19:37.169 --rc geninfo_unexecuted_blocks=1 00:19:37.169 00:19:37.169 ' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.169 --rc genhtml_branch_coverage=1 00:19:37.169 --rc genhtml_function_coverage=1 00:19:37.169 --rc genhtml_legend=1 00:19:37.169 --rc geninfo_all_blocks=1 00:19:37.169 --rc geninfo_unexecuted_blocks=1 00:19:37.169 00:19:37.169 ' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.169 --rc genhtml_branch_coverage=1 00:19:37.169 --rc genhtml_function_coverage=1 00:19:37.169 --rc genhtml_legend=1 00:19:37.169 --rc geninfo_all_blocks=1 00:19:37.169 --rc geninfo_unexecuted_blocks=1 00:19:37.169 00:19:37.169 ' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.169 --rc genhtml_branch_coverage=1 00:19:37.169 --rc genhtml_function_coverage=1 00:19:37.169 --rc genhtml_legend=1 00:19:37.169 --rc geninfo_all_blocks=1 00:19:37.169 --rc geninfo_unexecuted_blocks=1 00:19:37.169 00:19:37.169 ' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.169 
21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.169 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.170 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.170 21:46:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:37.170 Cannot find device "nvmf_init_br" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:37.170 Cannot find device "nvmf_init_br2" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:37.170 Cannot find device "nvmf_tgt_br" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.170 Cannot find device "nvmf_tgt_br2" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:37.170 Cannot find device "nvmf_init_br" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:37.170 Cannot find device "nvmf_init_br2" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:37.170 Cannot find device "nvmf_tgt_br" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:37.170 Cannot find device "nvmf_tgt_br2" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:37.170 Cannot find device "nvmf_br" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:37.170 Cannot find device "nvmf_init_if" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:37.170 Cannot find device "nvmf_init_if2" 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:37.170 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.429 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.429 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:19:37.429 21:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
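The nvmf_veth_init trace above builds the test network: two initiator-side veth interfaces stay in the default namespace (10.0.0.1 and 10.0.0.2), two target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), the peer ends are enslaved to the nvmf_br bridge, and iptables rules admit NVMe/TCP traffic on port 4420. Below is a condensed sketch of that setup, using only commands that appear in the trace (interface names, addresses, and ports exactly as in the log; the loops are just shorthand for the repeated ip link calls):

    # create the target namespace and the four veth pairs; the peer ends become bridge ports
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses in the default namespace, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring the links up, then bridge all peer ends together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP (port 4420) in, and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping exchanges that follow in the log just confirm the topology: 10.0.0.3 and 10.0.0.4 are reachable from the default namespace, and 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk.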
00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:37.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:37.429 00:19:37.429 --- 10.0.0.3 ping statistics --- 00:19:37.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.429 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:37.429 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:37.429 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:19:37.429 00:19:37.429 --- 10.0.0.4 ping statistics --- 00:19:37.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.429 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:37.429 00:19:37.429 --- 10.0.0.1 ping statistics --- 00:19:37.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.429 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:37.429 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:37.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:19:37.430 00:19:37.430 --- 10.0.0.2 ping statistics --- 00:19:37.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.430 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.430 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82120 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82120 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:37.688 21:46:38 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82120 ']' 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.688 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.688 [2024-12-10 21:46:38.272225] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:19:37.688 [2024-12-10 21:46:38.272316] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.688 [2024-12-10 21:46:38.419431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:37.688 [2024-12-10 21:46:38.456745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.688 [2024-12-10 21:46:38.456806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.688 [2024-12-10 21:46:38.456821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.688 [2024-12-10 21:46:38.456831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.688 [2024-12-10 21:46:38.456841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.688 [2024-12-10 21:46:38.460479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.688 [2024-12-10 21:46:38.460517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.946 [2024-12-10 21:46:38.492906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.946 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:38.204 [2024-12-10 21:46:38.819566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.204 21:46:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:38.462 Malloc0 00:19:38.462 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.720 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.979 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:39.237 [2024-12-10 21:46:39.950603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82156 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82156 /var/tmp/bdevperf.sock 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82156 ']' 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.237 21:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.237 [2024-12-10 21:46:40.014876] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:19:39.237 [2024-12-10 21:46:40.014960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82156 ] 00:19:39.495 [2024-12-10 21:46:40.157528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.495 [2024-12-10 21:46:40.189715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.495 [2024-12-10 21:46:40.218992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.753 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.753 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:39.753 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:40.011 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:40.269 NVMe0n1 00:19:40.270 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82172 00:19:40.270 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.270 21:46:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:40.270 Running I/O for 10 seconds... 
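With the fabric in place, the trace above configures the target and wires up bdevperf as the initiator: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks added as a namespace of nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.3:4420, and a bdev_nvme controller attached with the two parameters this test exercises, --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2. A condensed sketch of the sequence, taken from the rpc.py and bdevperf invocations in the log (paths, NQNs, and sizes as logged; timeout.sh itself also backgrounds the processes and waits on their RPC sockets before issuing these calls):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (nvmf_tgt was started with:
    #   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: bdevperf on its own RPC socket, then attach the controller
    # with the controller-loss timeout and reconnect delay under test
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 10-second verify workload then runs against NVMe0n1; the nvmf_subsystem_remove_listener call that follows yanks the only path, which is what produces the long run of ABORTED - SQ DELETION completions below as in-flight I/O on the deleted submission queue is failed back to the host.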
00:19:41.205 21:46:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.466 8640.00 IOPS, 33.75 MiB/s [2024-12-10T21:46:42.249Z] [2024-12-10 21:46:42.231227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:41.466 [2024-12-10 21:46:42.231711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 21:46:42.231892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.466 [2024-12-10 
21:46:42.231912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.466 [2024-12-10 21:46:42.231984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.466 [2024-12-10 21:46:42.231993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 
21:46:42.232744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.467 [2024-12-10 21:46:42.232773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.467 [2024-12-10 21:46:42.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.467 [2024-12-10 21:46:42.232824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.232974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.232988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.468 [2024-12-10 21:46:42.233270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:41.468 [2024-12-10 21:46:42.233597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.468 [2024-12-10 21:46:42.233668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.468 [2024-12-10 21:46:42.233678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7100 is same with the state(6) to be set 00:19:41.469 [2024-12-10 21:46:42.233700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83992 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84008 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233800] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84416 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84424 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84432 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.233970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.233978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84440 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.233986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.233996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:19:41.469 [2024-12-10 21:46:42.234003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.234010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84448 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.234019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.234034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.234042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84456 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.234051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.234066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.234074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84464 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.234083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.469 [2024-12-10 21:46:42.234099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.469 [2024-12-10 21:46:42.234106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84472 len:8 PRP1 0x0 PRP2 0x0 00:19:41.469 [2024-12-10 21:46:42.234114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.469 [2024-12-10 21:46:42.234259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.469 [2024-12-10 21:46:42.234279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.469 [2024-12-10 21:46:42.234299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.469 [2024-12-10 21:46:42.234317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.469 [2024-12-10 21:46:42.234326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2049070 is same with the state(6) to be set 00:19:41.469 [2024-12-10 21:46:42.234579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:41.469 [2024-12-10 21:46:42.234612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2049070 (9): Bad file descriptor 00:19:41.469 [2024-12-10 21:46:42.234708] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.469 [2024-12-10 21:46:42.234730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2049070 with addr=10.0.0.3, port=4420 00:19:41.469 [2024-12-10 21:46:42.234741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2049070 is same with the state(6) to be set 00:19:41.469 [2024-12-10 21:46:42.234758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2049070 (9): Bad file descriptor 00:19:41.469 [2024-12-10 21:46:42.234774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:41.469 [2024-12-10 21:46:42.234783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:41.469 [2024-12-10 21:46:42.234793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:41.469 [2024-12-10 21:46:42.234804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:41.469 [2024-12-10 21:46:42.234814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:41.728 21:46:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:43.671 5216.00 IOPS, 20.38 MiB/s [2024-12-10T21:46:44.454Z] 3477.33 IOPS, 13.58 MiB/s [2024-12-10T21:46:44.454Z] [2024-12-10 21:46:44.235102] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.671 [2024-12-10 21:46:44.235168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2049070 with addr=10.0.0.3, port=4420 00:19:43.671 [2024-12-10 21:46:44.235185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2049070 is same with the state(6) to be set 00:19:43.671 [2024-12-10 21:46:44.235220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2049070 (9): Bad file descriptor 00:19:43.671 [2024-12-10 21:46:44.235254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:43.671 [2024-12-10 21:46:44.235266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:43.671 [2024-12-10 21:46:44.235276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:43.671 [2024-12-10 21:46:44.235288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:43.671 [2024-12-10 21:46:44.235300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:43.671 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:43.671 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.671 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:43.929 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:43.929 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:43.929 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:43.929 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:44.187 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:44.188 21:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:45.379 2608.00 IOPS, 10.19 MiB/s [2024-12-10T21:46:46.421Z] 2086.40 IOPS, 8.15 MiB/s [2024-12-10T21:46:46.421Z] [2024-12-10 21:46:46.235563] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.638 [2024-12-10 21:46:46.235631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2049070 with addr=10.0.0.3, port=4420 00:19:45.638 [2024-12-10 21:46:46.235649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2049070 is same with the state(6) to be set 00:19:45.638 [2024-12-10 21:46:46.235677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2049070 (9): Bad file descriptor 00:19:45.638 [2024-12-10 21:46:46.235697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:45.638 [2024-12-10 21:46:46.235708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:45.638 [2024-12-10 21:46:46.235719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:45.638 [2024-12-10 21:46:46.235730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:45.638 [2024-12-10 21:46:46.235742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:47.507 1738.67 IOPS, 6.79 MiB/s [2024-12-10T21:46:48.290Z] 1490.29 IOPS, 5.82 MiB/s [2024-12-10T21:46:48.290Z] [2024-12-10 21:46:48.235891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:47.507 [2024-12-10 21:46:48.236121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:47.507 [2024-12-10 21:46:48.236143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:47.507 [2024-12-10 21:46:48.236157] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:47.507 [2024-12-10 21:46:48.236169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
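The xtrace above is host/timeout.sh confirming that, while the target port stays unreachable and the reconnect attempts keep failing, bdevperf still reports the controller and its bdev by name. Reconstructed from the traced commands (a sketch of what the trace implies, not a copy of the upstream script), the two helpers and the checks at @57/@58/@61 amount to:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    get_controller() {
        # host/timeout.sh@41: name of the attached NVMe controller, e.g. NVMe0
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # host/timeout.sh@37: name of the bdev it exposes, e.g. NVMe0n1
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
    }

    [[ $(get_controller) == "NVMe0" ]]   # still registered while reconnecting
    [[ $(get_bdev) == "NVMe0n1" ]]
    sleep 5                              # @61: wait out the controller-loss window

Later in the log the same two checks expect empty strings, once the controller has given up and been torn down.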
00:19:48.700 1304.00 IOPS, 5.09 MiB/s 00:19:48.700 Latency(us) 00:19:48.700 [2024-12-10T21:46:49.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.700 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.700 Verification LBA range: start 0x0 length 0x4000 00:19:48.700 NVMe0n1 : 8.20 1271.57 4.97 15.60 0.00 99301.20 4081.11 7015926.69 00:19:48.700 [2024-12-10T21:46:49.483Z] =================================================================================================================== 00:19:48.700 [2024-12-10T21:46:49.483Z] Total : 1271.57 4.97 15.60 0.00 99301.20 4081.11 7015926.69 00:19:48.700 { 00:19:48.700 "results": [ 00:19:48.700 { 00:19:48.700 "job": "NVMe0n1", 00:19:48.700 "core_mask": "0x4", 00:19:48.700 "workload": "verify", 00:19:48.700 "status": "finished", 00:19:48.700 "verify_range": { 00:19:48.700 "start": 0, 00:19:48.700 "length": 16384 00:19:48.700 }, 00:19:48.700 "queue_depth": 128, 00:19:48.700 "io_size": 4096, 00:19:48.700 "runtime": 8.204063, 00:19:48.700 "iops": 1271.5650769624758, 00:19:48.700 "mibps": 4.967051081884671, 00:19:48.700 "io_failed": 128, 00:19:48.700 "io_timeout": 0, 00:19:48.700 "avg_latency_us": 99301.20392286501, 00:19:48.700 "min_latency_us": 4081.1054545454544, 00:19:48.700 "max_latency_us": 7015926.69090909 00:19:48.700 } 00:19:48.700 ], 00:19:48.700 "core_count": 1 00:19:48.700 } 00:19:49.292 21:46:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:49.292 21:46:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:49.292 21:46:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:49.551 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:49.551 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:49.551 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:49.551 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82172 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82156 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82156 ']' 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82156 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82156 00:19:50.117 killing process with pid 82156 00:19:50.117 Received shutdown signal, test time was about 9.632842 seconds 00:19:50.117 00:19:50.117 Latency(us) 00:19:50.117 [2024-12-10T21:46:50.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.117 [2024-12-10T21:46:50.900Z] =================================================================================================================== 00:19:50.117 [2024-12-10T21:46:50.900Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82156' 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82156 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82156 00:19:50.117 21:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.376 [2024-12-10 21:46:51.079595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82295 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82295 /var/tmp/bdevperf.sock 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82295 ']' 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.376 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.635 [2024-12-10 21:46:51.155588] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:19:50.635 [2024-12-10 21:46:51.155713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82295 ] 00:19:50.635 [2024-12-10 21:46:51.307879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.635 [2024-12-10 21:46:51.341406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.635 [2024-12-10 21:46:51.371070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.893 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.893 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:50.893 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:51.152 21:46:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:51.410 NVMe0n1 00:19:51.410 21:46:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82311 00:19:51.410 21:46:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.410 21:46:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:51.668 Running I/O for 10 seconds... 
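Everything the next abort dump exercises is set up here: the target gets a TCP listener, a fresh bdevperf instance (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f, as traced above) is attached with short reconnect/loss timeouts, the verify workload is launched, and immediately below (host/timeout.sh@87) the listener is removed out from under it. Condensed from the traced commands just above and that remove_listener call into one readable sequence (the grouping is editorial; the calls, paths and flags are copied from the log):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc() { "$rpc_py" -s /var/tmp/bdevperf.sock "$@"; }

    # Target side: expose the subsystem on 10.0.0.3:4420.
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf side: keep the traced retry setting, then attach with a 1 s
    # reconnect delay, fast I/O failure after 2 s, controller loss after 5 s.
    bperf_rpc bdev_nvme_set_options -r -1
    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Start the 10-second verify run, give it a second, then pull the listener
    # so the queued I/O comes back ABORTED - SQ DELETION, as dumped below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    "$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420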
00:19:52.608 21:46:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:52.608 6937.00 IOPS, 27.10 MiB/s [2024-12-10T21:46:53.391Z] [2024-12-10 21:46:53.358160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.608 [2024-12-10 21:46:53.358611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.608 [2024-12-10 21:46:53.358620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:52.609 [2024-12-10 21:46:53.358662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.358982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.358994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.609 [2024-12-10 21:46:53.359519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.609 [2024-12-10 21:46:53.359530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 
21:46:53.359661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.359982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.359993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.610 [2024-12-10 21:46:53.360334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.610 [2024-12-10 21:46:53.360354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.610 [2024-12-10 21:46:53.360365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63720 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.360555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 
[2024-12-10 21:46:53.360712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.360972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.360984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.611 [2024-12-10 21:46:53.361209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.611 [2024-12-10 21:46:53.361382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.611 [2024-12-10 21:46:53.361394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2165100 is same with the state(6) to be set 00:19:52.611 [2024-12-10 21:46:53.361408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:52.611 [2024-12-10 21:46:53.361416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:52.611 [2024-12-10 21:46:53.361425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:19:52.612 [2024-12-10 21:46:53.361437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.612 [2024-12-10 21:46:53.361615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.612 [2024-12-10 21:46:53.361643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.612 [2024-12-10 21:46:53.361663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.612 [2024-12-10 21:46:53.361679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.612 [2024-12-10 21:46:53.361695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.612 [2024-12-10 21:46:53.361705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.612 [2024-12-10 21:46:53.361715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.612 [2024-12-10 21:46:53.361724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.612 [2024-12-10 21:46:53.361734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:19:52.612 [2024-12-10 21:46:53.361955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:52.612 [2024-12-10 21:46:53.361977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:19:52.612 [2024-12-10 21:46:53.362074] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.612 [2024-12-10 21:46:53.362096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:19:52.612 [2024-12-10 21:46:53.362107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:19:52.612 [2024-12-10 21:46:53.362125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:19:52.612 [2024-12-10 21:46:53.362141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:52.612 [2024-12-10 21:46:53.362151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:52.612 [2024-12-10 21:46:53.362161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:52.612 [2024-12-10 21:46:53.362172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
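The repeated connect() failures above report errno = 111 (ECONNREFUSED): at this point in the run the target apparently has no TCP listener on 10.0.0.3 port 4420, so every reconnect attempt from the host side is refused until a listener is re-added. A minimal sketch of the RPC that tears such a listener down, assuming the same subsystem NQN and address used elsewhere in this log (it mirrors the nvmf_subsystem_remove_listener invocation visible later in the trace, and is not a line from this particular point of the script):

  # Sketch only: removing the TCP listener makes host reconnects fail with ECONNREFUSED (errno 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420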
00:19:52.612 [2024-12-10 21:46:53.362182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:52.871 21:46:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:53.697 3981.00 IOPS, 15.55 MiB/s [2024-12-10T21:46:54.480Z] [2024-12-10 21:46:54.362325] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.697 [2024-12-10 21:46:54.362598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:19:53.697 [2024-12-10 21:46:54.362839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:19:53.697 [2024-12-10 21:46:54.363013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:19:53.697 [2024-12-10 21:46:54.363281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:53.697 [2024-12-10 21:46:54.363428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:53.697 [2024-12-10 21:46:54.363519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:53.697 [2024-12-10 21:46:54.363630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:53.697 [2024-12-10 21:46:54.363781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:53.697 21:46:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:53.955 [2024-12-10 21:46:54.701794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:53.955 21:46:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82311 00:19:54.779 2654.00 IOPS, 10.37 MiB/s [2024-12-10T21:46:55.562Z] [2024-12-10 21:46:55.379672] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
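Reading the shell trace lines interleaved above, the recovery path is: the script sleeps while reconnect attempts keep failing, re-adds the listener at host/timeout.sh line 91, then waits on job 82311 (presumably the bdevperf run started earlier); once the listener is back, the next reset attempt logs "Resetting controller successful". A minimal reconstruction of that sequence, assuming nothing beyond what the trace shows:

  # Sketch reconstructed from the host/timeout.sh trace above (script lines 90-92), not the script itself
  sleep 1                                                    # line 90: give the failing reconnect loop time to retry
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420  # line 91: restore the TCP listener
  wait 82311                                                 # line 92: wait for the in-flight bdevperf job to finish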
00:19:56.656 1990.50 IOPS, 7.78 MiB/s [2024-12-10T21:46:58.376Z] 3126.80 IOPS, 12.21 MiB/s [2024-12-10T21:46:59.311Z] 4129.67 IOPS, 16.13 MiB/s [2024-12-10T21:47:00.248Z] 4851.14 IOPS, 18.95 MiB/s [2024-12-10T21:47:01.626Z] 5386.25 IOPS, 21.04 MiB/s [2024-12-10T21:47:02.561Z] 5726.44 IOPS, 22.37 MiB/s [2024-12-10T21:47:02.561Z] 6011.40 IOPS, 23.48 MiB/s 00:20:01.778 Latency(us) 00:20:01.778 [2024-12-10T21:47:02.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.778 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.778 Verification LBA range: start 0x0 length 0x4000 00:20:01.778 NVMe0n1 : 10.01 6015.75 23.50 0.00 0.00 21231.34 3187.43 3019898.88 00:20:01.778 [2024-12-10T21:47:02.561Z] =================================================================================================================== 00:20:01.778 [2024-12-10T21:47:02.561Z] Total : 6015.75 23.50 0.00 0.00 21231.34 3187.43 3019898.88 00:20:01.778 { 00:20:01.778 "results": [ 00:20:01.778 { 00:20:01.778 "job": "NVMe0n1", 00:20:01.778 "core_mask": "0x4", 00:20:01.778 "workload": "verify", 00:20:01.778 "status": "finished", 00:20:01.778 "verify_range": { 00:20:01.778 "start": 0, 00:20:01.778 "length": 16384 00:20:01.778 }, 00:20:01.778 "queue_depth": 128, 00:20:01.778 "io_size": 4096, 00:20:01.778 "runtime": 10.01138, 00:20:01.778 "iops": 6015.754071866217, 00:20:01.778 "mibps": 23.49903934322741, 00:20:01.778 "io_failed": 0, 00:20:01.778 "io_timeout": 0, 00:20:01.778 "avg_latency_us": 21231.341231422248, 00:20:01.778 "min_latency_us": 3187.4327272727273, 00:20:01.778 "max_latency_us": 3019898.88 00:20:01.778 } 00:20:01.778 ], 00:20:01.778 "core_count": 1 00:20:01.778 } 00:20:01.778 21:47:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82416 00:20:01.778 21:47:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.778 21:47:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:01.778 Running I/O for 10 seconds... 
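As a quick cross-check of the summary above, the reported throughput follows directly from the reported IOPS and the 4096-byte I/O size; a one-liner to reproduce the MiB/s figure, with the values copied from the JSON block:

  # 6015.75 IOPS x 4096 B per I/O, converted to MiB/s; matches the 23.50 "mibps" field above
  awk 'BEGIN { printf "%.2f MiB/s\n", 6015.75 * 4096 / (1024 * 1024) }'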
00:20:02.713 21:47:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:02.974 8857.00 IOPS, 34.60 MiB/s [2024-12-10T21:47:03.757Z] [2024-12-10 21:47:03.533749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25987a0 is same with the state(6) to be set 00:20:02.974 [2024-12-10 21:47:03.534294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.974 [2024-12-10 21:47:03.534326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.974 [2024-12-10 21:47:03.534357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.974 [2024-12-10 21:47:03.534522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.974 [2024-12-10 21:47:03.534534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 
[2024-12-10 21:47:03.534755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.534888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.534981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.534990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.535011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.535032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.975 [2024-12-10 21:47:03.535053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.975 [2024-12-10 21:47:03.535381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.975 [2024-12-10 21:47:03.535393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:02.976 [2024-12-10 21:47:03.535620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.535911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.535984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.535993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.536013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.536033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.536054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.976 [2024-12-10 21:47:03.536075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.976 [2024-12-10 21:47:03.536228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.976 [2024-12-10 21:47:03.536238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.977 [2024-12-10 21:47:03.536493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.977 [2024-12-10 21:47:03.536636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2166010 is same with the state(6) to be set 00:20:02.977 [2024-12-10 21:47:03.536658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 
[2024-12-10 21:47:03.536885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.536970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.536977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.536985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.536994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.537003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.537011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.537018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.537027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.537037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.537044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.537052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:20:02.977 [2024-12-10 21:47:03.537061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.977 [2024-12-10 21:47:03.537071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.977 [2024-12-10 21:47:03.537078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.977 [2024-12-10 21:47:03.537086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.537105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.537114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.537123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.537141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.537149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.537156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.537175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.537182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.537190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.537209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.537216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.537224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.537243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.537250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.537262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.537271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.550903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:02.978 [2024-12-10 21:47:03.550941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:02.978 [2024-12-10 21:47:03.550957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:20:02.978 [2024-12-10 21:47:03.550972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.551126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.978 [2024-12-10 21:47:03.551149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.551165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.978 [2024-12-10 21:47:03.551179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.551193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.978 [2024-12-10 21:47:03.551206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.551238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.978 [2024-12-10 21:47:03.551264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.978 [2024-12-10 21:47:03.551273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:20:02.978 [2024-12-10 21:47:03.551534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:02.978 [2024-12-10 21:47:03.551562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:20:02.978 [2024-12-10 21:47:03.551658] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:02.978 [2024-12-10 21:47:03.551679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:20:02.978 [2024-12-10 21:47:03.551690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:20:02.978 [2024-12-10 21:47:03.551708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:20:02.978 [2024-12-10 21:47:03.551724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:02.978 [2024-12-10 21:47:03.551733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:02.978 [2024-12-10 21:47:03.551759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:02.978 [2024-12-10 21:47:03.551774] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:02.978 [2024-12-10 21:47:03.551789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:02.978 21:47:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:03.913 4931.50 IOPS, 19.26 MiB/s [2024-12-10T21:47:04.696Z] [2024-12-10 21:47:04.551948] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.913 [2024-12-10 21:47:04.552023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:20:03.913 [2024-12-10 21:47:04.552042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:20:03.913 [2024-12-10 21:47:04.552069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:20:03.913 [2024-12-10 21:47:04.552089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:03.913 [2024-12-10 21:47:04.552099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:03.913 [2024-12-10 21:47:04.552110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:03.913 [2024-12-10 21:47:04.552121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:03.913 [2024-12-10 21:47:04.552132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:04.873 3287.67 IOPS, 12.84 MiB/s [2024-12-10T21:47:05.656Z] [2024-12-10 21:47:05.552285] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:04.873 [2024-12-10 21:47:05.552373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:20:04.873 [2024-12-10 21:47:05.552391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:20:04.873 [2024-12-10 21:47:05.552417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:20:04.873 [2024-12-10 21:47:05.552437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:04.873 [2024-12-10 21:47:05.552466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:04.873 [2024-12-10 21:47:05.552478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:04.873 [2024-12-10 21:47:05.552489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:04.873 [2024-12-10 21:47:05.552501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:05.809 2465.75 IOPS, 9.63 MiB/s [2024-12-10T21:47:06.592Z] [2024-12-10 21:47:06.556264] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.809 [2024-12-10 21:47:06.556340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f7070 with addr=10.0.0.3, port=4420 00:20:05.809 [2024-12-10 21:47:06.556356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7070 is same with the state(6) to be set 00:20:05.809 [2024-12-10 21:47:06.556641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f7070 (9): Bad file descriptor 00:20:05.809 [2024-12-10 21:47:06.556897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:05.809 [2024-12-10 21:47:06.556911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:05.809 [2024-12-10 21:47:06.556922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:05.809 [2024-12-10 21:47:06.556933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:05.809 [2024-12-10 21:47:06.556944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:05.809 21:47:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:06.376 [2024-12-10 21:47:06.853374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:06.376 21:47:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82416 00:20:06.943 1972.60 IOPS, 7.71 MiB/s [2024-12-10T21:47:07.726Z] [2024-12-10 21:47:07.584217] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:20:08.812 2903.67 IOPS, 11.34 MiB/s [2024-12-10T21:47:10.528Z] 3788.29 IOPS, 14.80 MiB/s [2024-12-10T21:47:11.462Z] 4446.75 IOPS, 17.37 MiB/s [2024-12-10T21:47:12.835Z] 4955.33 IOPS, 19.36 MiB/s [2024-12-10T21:47:12.835Z] 5364.60 IOPS, 20.96 MiB/s 00:20:12.052 Latency(us) 00:20:12.052 [2024-12-10T21:47:12.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.052 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.052 Verification LBA range: start 0x0 length 0x4000 00:20:12.052 NVMe0n1 : 10.01 5371.58 20.98 3505.60 0.00 14391.13 703.77 3035150.89 00:20:12.052 [2024-12-10T21:47:12.835Z] =================================================================================================================== 00:20:12.052 [2024-12-10T21:47:12.835Z] Total : 5371.58 20.98 3505.60 0.00 14391.13 0.00 3035150.89 00:20:12.052 { 00:20:12.052 "results": [ 00:20:12.052 { 00:20:12.052 "job": "NVMe0n1", 00:20:12.052 "core_mask": "0x4", 00:20:12.052 "workload": "verify", 00:20:12.052 "status": "finished", 00:20:12.052 "verify_range": { 00:20:12.052 "start": 0, 00:20:12.052 "length": 16384 00:20:12.052 }, 00:20:12.052 "queue_depth": 128, 00:20:12.052 "io_size": 4096, 00:20:12.052 "runtime": 10.010833, 00:20:12.052 "iops": 5371.580966339165, 00:20:12.052 "mibps": 20.982738149762362, 00:20:12.052 "io_failed": 35094, 00:20:12.052 "io_timeout": 0, 00:20:12.052 "avg_latency_us": 14391.129515727103, 00:20:12.052 "min_latency_us": 703.7672727272727, 00:20:12.052 "max_latency_us": 3035150.8945454545 00:20:12.052 } 00:20:12.052 ], 00:20:12.052 "core_count": 1 00:20:12.052 } 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82295 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82295 ']' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82295 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82295 00:20:12.052 killing process with pid 82295 00:20:12.052 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.052 00:20:12.052 Latency(us) 00:20:12.052 [2024-12-10T21:47:12.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.052 [2024-12-10T21:47:12.835Z] =================================================================================================================== 00:20:12.052 [2024-12-10T21:47:12.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82295' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82295 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82295 00:20:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82530 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82530 /var/tmp/bdevperf.sock 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82530 ']' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.052 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:12.052 [2024-12-10 21:47:12.692885] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:20:12.052 [2024-12-10 21:47:12.693183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82530 ] 00:20:12.311 [2024-12-10 21:47:12.835771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.311 [2024-12-10 21:47:12.868230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.311 [2024-12-10 21:47:12.897290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:12.311 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.311 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:12.311 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82539 00:20:12.311 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82530 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:12.311 21:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:12.570 21:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:13.137 NVMe0n1 00:20:13.137 21:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82580 00:20:13.137 21:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.137 21:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:13.137 Running I/O for 10 seconds... 
00:20:14.072 21:47:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:14.334 14478.00 IOPS, 56.55 MiB/s [2024-12-10T21:47:15.117Z] [2024-12-10 21:47:14.943884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 
00:20:14.334 [2024-12-10 21:47:14.944285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.334 [2024-12-10 21:47:14.944501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.335 [2024-12-10 21:47:14.945226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6b50 is same with the state(6) to be set 00:20:14.335 [2024-12-10 21:47:14.945282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.335 [2024-12-10 21:47:14.945567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:14.335 [2024-12-10 21:47:14.945587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.335 [2024-12-10 21:47:14.945596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 
21:47:14.945789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.945980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.945991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.336 [2024-12-10 21:47:14.946424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.336 [2024-12-10 21:47:14.946433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 
21:47:14.946827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.946979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.946989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.337 [2024-12-10 21:47:14.947280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.337 [2024-12-10 21:47:14.947290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 
[2024-12-10 21:47:14.947678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.338 [2024-12-10 21:47:14.947953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.947963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9100 is same with the state(6) to be set 00:20:14.338 [2024-12-10 21:47:14.947975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:14.338 [2024-12-10 21:47:14.947983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:14.338 [2024-12-10 21:47:14.947991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44552 len:8 PRP1 0x0 PRP2 0x0 00:20:14.338 [2024-12-10 21:47:14.948000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.338 [2024-12-10 21:47:14.948331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:14.339 [2024-12-10 21:47:14.948591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b070 (9): Bad file descriptor 00:20:14.339 [2024-12-10 21:47:14.948707] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.339 [2024-12-10 21:47:14.948730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3b070 with addr=10.0.0.3, port=4420 00:20:14.339 [2024-12-10 21:47:14.948741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3b070 is same with the state(6) to be set 00:20:14.339 [2024-12-10 21:47:14.948760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b070 (9): Bad file descriptor 00:20:14.339 [2024-12-10 21:47:14.948776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:14.339 [2024-12-10 21:47:14.948786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:14.339 [2024-12-10 21:47:14.948796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:20:14.339 [2024-12-10 21:47:14.948808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:14.339 [2024-12-10 21:47:14.948819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:14.339 21:47:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82580 00:20:16.208 8319.50 IOPS, 32.50 MiB/s [2024-12-10T21:47:16.991Z] 5546.33 IOPS, 21.67 MiB/s [2024-12-10T21:47:16.991Z] [2024-12-10 21:47:16.948986] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.208 [2024-12-10 21:47:16.949058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3b070 with addr=10.0.0.3, port=4420 00:20:16.208 [2024-12-10 21:47:16.949075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3b070 is same with the state(6) to be set 00:20:16.208 [2024-12-10 21:47:16.949102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b070 (9): Bad file descriptor 00:20:16.208 [2024-12-10 21:47:16.949121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:16.208 [2024-12-10 21:47:16.949131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:16.208 [2024-12-10 21:47:16.949142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:16.208 [2024-12-10 21:47:16.949154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:16.208 [2024-12-10 21:47:16.949165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:18.127 4159.75 IOPS, 16.25 MiB/s [2024-12-10T21:47:19.169Z] 3327.80 IOPS, 13.00 MiB/s [2024-12-10T21:47:19.169Z] [2024-12-10 21:47:18.949333] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.386 [2024-12-10 21:47:18.949408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3b070 with addr=10.0.0.3, port=4420 00:20:18.386 [2024-12-10 21:47:18.949425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3b070 is same with the state(6) to be set 00:20:18.386 [2024-12-10 21:47:18.949469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b070 (9): Bad file descriptor 00:20:18.386 [2024-12-10 21:47:18.949492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:18.386 [2024-12-10 21:47:18.949512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:18.386 [2024-12-10 21:47:18.949524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:18.386 [2024-12-10 21:47:18.949536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:18.386 [2024-12-10 21:47:18.949548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:20.259 2773.17 IOPS, 10.83 MiB/s [2024-12-10T21:47:21.042Z] 2377.00 IOPS, 9.29 MiB/s [2024-12-10T21:47:21.042Z] [2024-12-10 21:47:20.949642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:20.259 [2024-12-10 21:47:20.949719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:20.259 [2024-12-10 21:47:20.949733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:20.259 [2024-12-10 21:47:20.949744] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:20.259 [2024-12-10 21:47:20.949756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:21.194 2079.88 IOPS, 8.12 MiB/s 00:20:21.194 Latency(us) 00:20:21.194 [2024-12-10T21:47:21.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.194 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:21.194 NVMe0n1 : 8.17 2037.55 7.96 15.67 0.00 62233.33 8460.10 7015926.69 00:20:21.194 [2024-12-10T21:47:21.977Z] =================================================================================================================== 00:20:21.194 [2024-12-10T21:47:21.977Z] Total : 2037.55 7.96 15.67 0.00 62233.33 8460.10 7015926.69 00:20:21.194 { 00:20:21.194 "results": [ 00:20:21.194 { 00:20:21.194 "job": "NVMe0n1", 00:20:21.194 "core_mask": "0x4", 00:20:21.194 "workload": "randread", 00:20:21.194 "status": "finished", 00:20:21.194 "queue_depth": 128, 00:20:21.194 "io_size": 4096, 00:20:21.194 "runtime": 8.166166, 00:20:21.194 "iops": 2037.5534859320762, 00:20:21.194 "mibps": 7.959193304422173, 00:20:21.194 "io_failed": 128, 00:20:21.194 "io_timeout": 0, 00:20:21.194 "avg_latency_us": 62233.331975688176, 00:20:21.194 "min_latency_us": 8460.101818181818, 00:20:21.194 "max_latency_us": 7015926.69090909 00:20:21.194 } 00:20:21.194 ], 00:20:21.194 "core_count": 1 00:20:21.194 } 00:20:21.194 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.194 Attaching 5 probes... 
00:20:21.194 1480.654386: reset bdev controller NVMe0 00:20:21.194 1480.978080: reconnect bdev controller NVMe0 00:20:21.194 3481.187007: reconnect delay bdev controller NVMe0 00:20:21.194 3481.211124: reconnect bdev controller NVMe0 00:20:21.194 5481.536477: reconnect delay bdev controller NVMe0 00:20:21.194 5481.560410: reconnect bdev controller NVMe0 00:20:21.194 7481.944394: reconnect delay bdev controller NVMe0 00:20:21.194 7481.974872: reconnect bdev controller NVMe0 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82539 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82530 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82530 ']' 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82530 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.453 21:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82530 00:20:21.453 killing process with pid 82530 00:20:21.453 Received shutdown signal, test time was about 8.233395 seconds 00:20:21.453 00:20:21.453 Latency(us) 00:20:21.453 [2024-12-10T21:47:22.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.453 [2024-12-10T21:47:22.236Z] =================================================================================================================== 00:20:21.453 [2024-12-10T21:47:22.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82530' 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82530 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82530 00:20:21.453 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:21.712 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.712 21:47:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.712 rmmod nvme_tcp 00:20:21.712 rmmod nvme_fabrics 00:20:21.971 rmmod nvme_keyring 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82120 ']' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82120 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82120 ']' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82120 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82120 00:20:21.971 killing process with pid 82120 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82120' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82120 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82120 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:21.971 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:22.246 21:47:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:22.246 ************************************ 00:20:22.246 END TEST nvmf_timeout 00:20:22.246 ************************************ 00:20:22.246 00:20:22.246 real 0m45.409s 00:20:22.246 user 2m13.824s 00:20:22.246 sys 0m5.286s 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.246 21:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:22.246 21:47:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:22.246 21:47:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:22.246 ************************************ 00:20:22.246 END TEST nvmf_host 00:20:22.246 ************************************ 00:20:22.246 00:20:22.246 real 5m5.955s 00:20:22.246 user 13m25.874s 00:20:22.246 sys 1m7.396s 00:20:22.246 21:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.246 21:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.505 21:47:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:22.505 21:47:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:22.505 ************************************ 00:20:22.505 END TEST nvmf_tcp 00:20:22.505 ************************************ 00:20:22.505 00:20:22.505 real 13m2.276s 00:20:22.505 user 31m41.912s 00:20:22.505 sys 3m7.905s 00:20:22.505 21:47:23 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.505 21:47:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.505 21:47:23 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:22.505 21:47:23 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:22.505 21:47:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.505 21:47:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.505 21:47:23 -- common/autotest_common.sh@10 -- # set +x 00:20:22.505 ************************************ 00:20:22.505 START TEST nvmf_dif 00:20:22.505 ************************************ 00:20:22.505 21:47:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:22.505 * Looking for test storage... 
00:20:22.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:22.505 21:47:23 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:22.505 21:47:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:20:22.505 21:47:23 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:22.505 21:47:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.505 21:47:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.764 --rc genhtml_branch_coverage=1 00:20:22.764 --rc genhtml_function_coverage=1 00:20:22.764 --rc genhtml_legend=1 00:20:22.764 --rc geninfo_all_blocks=1 00:20:22.764 --rc geninfo_unexecuted_blocks=1 00:20:22.764 00:20:22.764 ' 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.764 --rc genhtml_branch_coverage=1 00:20:22.764 --rc genhtml_function_coverage=1 00:20:22.764 --rc genhtml_legend=1 00:20:22.764 --rc geninfo_all_blocks=1 00:20:22.764 --rc geninfo_unexecuted_blocks=1 00:20:22.764 00:20:22.764 ' 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.764 --rc genhtml_branch_coverage=1 00:20:22.764 --rc genhtml_function_coverage=1 00:20:22.764 --rc genhtml_legend=1 00:20:22.764 --rc geninfo_all_blocks=1 00:20:22.764 --rc geninfo_unexecuted_blocks=1 00:20:22.764 00:20:22.764 ' 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:22.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.764 --rc genhtml_branch_coverage=1 00:20:22.764 --rc genhtml_function_coverage=1 00:20:22.764 --rc genhtml_legend=1 00:20:22.764 --rc geninfo_all_blocks=1 00:20:22.764 --rc geninfo_unexecuted_blocks=1 00:20:22.764 00:20:22.764 ' 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.764 21:47:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.764 21:47:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.764 21:47:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.764 21:47:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.764 21:47:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:22.764 21:47:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:22.764 21:47:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:22.764 21:47:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:22.764 21:47:23 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:22.764 21:47:23 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:22.765 Cannot find device "nvmf_init_br" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:22.765 Cannot find device "nvmf_init_br2" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:22.765 Cannot find device "nvmf_tgt_br" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.765 Cannot find device "nvmf_tgt_br2" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:22.765 Cannot find device "nvmf_init_br" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:22.765 Cannot find device "nvmf_init_br2" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:22.765 Cannot find device "nvmf_tgt_br" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:22.765 Cannot find device "nvmf_tgt_br2" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:22.765 Cannot find device "nvmf_br" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:22.765 Cannot find device "nvmf_init_if" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:22.765 Cannot find device "nvmf_init_if2" 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.765 21:47:23 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.024 21:47:23 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.025 21:47:23 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:23.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:23.025 00:20:23.025 --- 10.0.0.3 ping statistics --- 00:20:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.025 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:23.025 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:23.025 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:20:23.025 00:20:23.025 --- 10.0.0.4 ping statistics --- 00:20:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.025 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:23.025 00:20:23.025 --- 10.0.0.1 ping statistics --- 00:20:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.025 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:23.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:23.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:23.025 00:20:23.025 --- 10.0.0.2 ping statistics --- 00:20:23.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.025 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:23.025 21:47:23 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.284 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.284 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.543 21:47:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:23.543 21:47:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83068 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:23.543 21:47:24 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83068 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83068 ']' 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.543 21:47:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.543 [2024-12-10 21:47:24.180486] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:20:23.543 [2024-12-10 21:47:24.180575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.803 [2024-12-10 21:47:24.336078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.803 [2024-12-10 21:47:24.373493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:23.803 [2024-12-10 21:47:24.373556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.803 [2024-12-10 21:47:24.373570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.803 [2024-12-10 21:47:24.373580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.803 [2024-12-10 21:47:24.373588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.803 [2024-12-10 21:47:24.373942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.803 [2024-12-10 21:47:24.408366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:23.803 21:47:24 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 21:47:24 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.803 21:47:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:23.803 21:47:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 [2024-12-10 21:47:24.503301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.803 21:47:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 ************************************ 00:20:23.803 START TEST fio_dif_1_default 00:20:23.803 ************************************ 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 bdev_null0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:23.803 
21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:23.803 [2024-12-10 21:47:24.547459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.803 { 00:20:23.803 "params": { 00:20:23.803 "name": "Nvme$subsystem", 00:20:23.803 "trtype": "$TEST_TRANSPORT", 00:20:23.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.803 "adrfam": "ipv4", 00:20:23.803 "trsvcid": "$NVMF_PORT", 00:20:23.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.803 "hdgst": ${hdgst:-false}, 00:20:23.803 "ddgst": ${ddgst:-false} 00:20:23.803 }, 00:20:23.803 "method": "bdev_nvme_attach_controller" 00:20:23.803 } 00:20:23.803 EOF 00:20:23.803 )") 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:23.803 "params": { 00:20:23.803 "name": "Nvme0", 00:20:23.803 "trtype": "tcp", 00:20:23.803 "traddr": "10.0.0.3", 00:20:23.803 "adrfam": "ipv4", 00:20:23.803 "trsvcid": "4420", 00:20:23.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:23.803 "hdgst": false, 00:20:23.803 "ddgst": false 00:20:23.803 }, 00:20:23.803 "method": "bdev_nvme_attach_controller" 00:20:23.803 }' 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:23.803 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.062 21:47:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.062 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:24.062 fio-3.35 00:20:24.062 Starting 1 thread 00:20:36.263 00:20:36.263 filename0: (groupid=0, jobs=1): err= 0: pid=83126: Tue Dec 10 21:47:35 2024 00:20:36.263 read: IOPS=8295, BW=32.4MiB/s (34.0MB/s)(324MiB/10001msec) 00:20:36.263 slat (nsec): min=6949, max=54674, avg=9094.58, stdev=2616.02 00:20:36.263 clat (usec): min=367, max=2172, avg=455.48, stdev=31.33 00:20:36.263 lat (usec): min=374, max=2184, avg=464.58, stdev=31.91 00:20:36.263 clat percentiles (usec): 00:20:36.263 | 1.00th=[ 412], 5.00th=[ 
424], 10.00th=[ 429], 20.00th=[ 437], 00:20:36.263 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 457], 00:20:36.263 | 70.00th=[ 465], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 498], 00:20:36.263 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 668], 00:20:36.263 | 99.99th=[ 1319] 00:20:36.263 bw ( KiB/s): min=31616, max=33696, per=100.00%, avg=33192.42, stdev=515.86, samples=19 00:20:36.263 iops : min= 7904, max= 8424, avg=8298.11, stdev=128.96, samples=19 00:20:36.263 lat (usec) : 500=95.66%, 750=4.33% 00:20:36.263 lat (msec) : 2=0.01%, 4=0.01% 00:20:36.263 cpu : usr=85.53%, sys=12.64%, ctx=61, majf=0, minf=9 00:20:36.263 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.264 issued rwts: total=82968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.264 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:36.264 00:20:36.264 Run status group 0 (all jobs): 00:20:36.264 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=324MiB (340MB), run=10001-10001msec 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 00:20:36.264 real 0m10.908s 00:20:36.264 user 0m9.143s 00:20:36.264 sys 0m1.486s 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 ************************************ 00:20:36.264 END TEST fio_dif_1_default 00:20:36.264 ************************************ 00:20:36.264 21:47:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:36.264 21:47:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.264 21:47:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 ************************************ 00:20:36.264 START TEST fio_dif_1_multi_subsystems 00:20:36.264 ************************************ 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- 
# fio_dif_1_multi_subsystems 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 bdev_null0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 [2024-12-10 21:47:35.503232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 bdev_null1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.264 { 00:20:36.264 "params": { 00:20:36.264 "name": "Nvme$subsystem", 00:20:36.264 "trtype": "$TEST_TRANSPORT", 00:20:36.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.264 "adrfam": "ipv4", 00:20:36.264 "trsvcid": "$NVMF_PORT", 00:20:36.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.264 "hdgst": ${hdgst:-false}, 00:20:36.264 "ddgst": ${ddgst:-false} 00:20:36.264 }, 00:20:36.264 "method": "bdev_nvme_attach_controller" 00:20:36.264 } 00:20:36.264 EOF 00:20:36.264 )") 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:36.264 21:47:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:36.264 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.265 { 00:20:36.265 "params": { 00:20:36.265 "name": "Nvme$subsystem", 00:20:36.265 "trtype": "$TEST_TRANSPORT", 00:20:36.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.265 "adrfam": "ipv4", 00:20:36.265 "trsvcid": "$NVMF_PORT", 00:20:36.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.265 "hdgst": ${hdgst:-false}, 00:20:36.265 "ddgst": ${ddgst:-false} 00:20:36.265 }, 00:20:36.265 "method": "bdev_nvme_attach_controller" 00:20:36.265 } 00:20:36.265 EOF 00:20:36.265 )") 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:36.265 "params": { 00:20:36.265 "name": "Nvme0", 00:20:36.265 "trtype": "tcp", 00:20:36.265 "traddr": "10.0.0.3", 00:20:36.265 "adrfam": "ipv4", 00:20:36.265 "trsvcid": "4420", 00:20:36.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.265 "hdgst": false, 00:20:36.265 "ddgst": false 00:20:36.265 }, 00:20:36.265 "method": "bdev_nvme_attach_controller" 00:20:36.265 },{ 00:20:36.265 "params": { 00:20:36.265 "name": "Nvme1", 00:20:36.265 "trtype": "tcp", 00:20:36.265 "traddr": "10.0.0.3", 00:20:36.265 "adrfam": "ipv4", 00:20:36.265 "trsvcid": "4420", 00:20:36.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.265 "hdgst": false, 00:20:36.265 "ddgst": false 00:20:36.265 }, 00:20:36.265 "method": "bdev_nvme_attach_controller" 00:20:36.265 }' 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.265 21:47:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.265 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:36.265 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:36.265 fio-3.35 00:20:36.265 Starting 2 threads 00:20:46.258 00:20:46.258 filename0: (groupid=0, jobs=1): err= 0: pid=83281: Tue Dec 10 21:47:46 2024 00:20:46.258 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(179MiB/10001msec) 00:20:46.258 slat (nsec): min=6160, max=80376, avg=14017.36, stdev=3654.20 00:20:46.258 clat (usec): min=694, max=2368, avg=832.36, stdev=33.55 00:20:46.258 lat (usec): min=704, max=2394, avg=846.37, stdev=34.12 00:20:46.258 clat percentiles (usec): 00:20:46.258 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 807], 00:20:46.258 | 30.00th=[ 816], 40.00th=[ 824], 50.00th=[ 832], 60.00th=[ 840], 00:20:46.258 | 70.00th=[ 848], 80.00th=[ 857], 90.00th=[ 865], 95.00th=[ 881], 00:20:46.258 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1057], 99.95th=[ 1123], 00:20:46.258 | 99.99th=[ 1483] 00:20:46.258 bw ( KiB/s): min=18208, max=18592, per=50.05%, avg=18371.37, stdev=111.82, samples=19 00:20:46.258 iops : min= 4552, max= 
4648, avg=4592.84, stdev=27.95, samples=19 00:20:46.258 lat (usec) : 750=0.11%, 1000=99.70% 00:20:46.258 lat (msec) : 2=0.18%, 4=0.01% 00:20:46.258 cpu : usr=90.06%, sys=8.48%, ctx=53, majf=0, minf=0 00:20:46.258 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.258 issued rwts: total=45888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.258 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:46.258 filename1: (groupid=0, jobs=1): err= 0: pid=83282: Tue Dec 10 21:47:46 2024 00:20:46.258 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(179MiB/10001msec) 00:20:46.258 slat (nsec): min=7781, max=77303, avg=13852.43, stdev=3517.14 00:20:46.258 clat (usec): min=593, max=2220, avg=834.08, stdev=42.93 00:20:46.258 lat (usec): min=611, max=2247, avg=847.93, stdev=43.97 00:20:46.258 clat percentiles (usec): 00:20:46.258 | 1.00th=[ 734], 5.00th=[ 758], 10.00th=[ 775], 20.00th=[ 807], 00:20:46.258 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 848], 00:20:46.258 | 70.00th=[ 857], 80.00th=[ 865], 90.00th=[ 881], 95.00th=[ 889], 00:20:46.258 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 1074], 99.95th=[ 1123], 00:20:46.258 | 99.99th=[ 1467] 00:20:46.258 bw ( KiB/s): min=18208, max=18592, per=50.05%, avg=18371.37, stdev=111.82, samples=19 00:20:46.258 iops : min= 4552, max= 4648, avg=4592.84, stdev=27.95, samples=19 00:20:46.258 lat (usec) : 750=3.29%, 1000=96.50% 00:20:46.258 lat (msec) : 2=0.20%, 4=0.01% 00:20:46.258 cpu : usr=89.94%, sys=8.65%, ctx=85, majf=0, minf=0 00:20:46.258 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.258 issued rwts: total=45888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.258 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:46.258 00:20:46.258 Run status group 0 (all jobs): 00:20:46.258 READ: bw=35.8MiB/s (37.6MB/s), 17.9MiB/s-17.9MiB/s (18.8MB/s-18.8MB/s), io=359MiB (376MB), run=10001-10001msec 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 00:20:46.258 real 0m11.047s 00:20:46.258 user 0m18.696s 00:20:46.258 sys 0m1.947s 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.258 21:47:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 ************************************ 00:20:46.258 END TEST fio_dif_1_multi_subsystems 00:20:46.259 ************************************ 00:20:46.259 21:47:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:46.259 21:47:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.259 21:47:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.259 21:47:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 ************************************ 00:20:46.259 START TEST fio_dif_rand_params 00:20:46.259 ************************************ 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:46.259 21:47:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 bdev_null0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 [2024-12-10 21:47:46.605997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.259 { 00:20:46.259 "params": { 00:20:46.259 "name": "Nvme$subsystem", 00:20:46.259 "trtype": "$TEST_TRANSPORT", 00:20:46.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.259 "adrfam": "ipv4", 00:20:46.259 "trsvcid": "$NVMF_PORT", 00:20:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.259 "hdgst": ${hdgst:-false}, 00:20:46.259 "ddgst": ${ddgst:-false} 00:20:46.259 }, 00:20:46.259 "method": "bdev_nvme_attach_controller" 00:20:46.259 } 00:20:46.259 EOF 00:20:46.259 )") 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:46.259 
21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.259 "params": { 00:20:46.259 "name": "Nvme0", 00:20:46.259 "trtype": "tcp", 00:20:46.259 "traddr": "10.0.0.3", 00:20:46.259 "adrfam": "ipv4", 00:20:46.259 "trsvcid": "4420", 00:20:46.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.259 "hdgst": false, 00:20:46.259 "ddgst": false 00:20:46.259 }, 00:20:46.259 "method": "bdev_nvme_attach_controller" 00:20:46.259 }' 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:46.259 21:47:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.259 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:46.259 ... 
00:20:46.259 fio-3.35 00:20:46.259 Starting 3 threads 00:20:52.823 00:20:52.823 filename0: (groupid=0, jobs=1): err= 0: pid=83438: Tue Dec 10 21:47:52 2024 00:20:52.823 read: IOPS=243, BW=30.5MiB/s (32.0MB/s)(153MiB/5005msec) 00:20:52.823 slat (nsec): min=4728, max=44978, avg=16400.09, stdev=5382.32 00:20:52.823 clat (usec): min=12061, max=18414, avg=12258.17, stdev=314.00 00:20:52.823 lat (usec): min=12076, max=18442, avg=12274.57, stdev=314.32 00:20:52.823 clat percentiles (usec): 00:20:52.823 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12125], 00:20:52.823 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:20:52.823 | 70.00th=[12256], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:52.823 | 99.00th=[12518], 99.50th=[12649], 99.90th=[18482], 99.95th=[18482], 00:20:52.823 | 99.99th=[18482] 00:20:52.823 bw ( KiB/s): min=30720, max=31488, per=33.28%, avg=31180.80, stdev=396.59, samples=10 00:20:52.823 iops : min= 240, max= 246, avg=243.60, stdev= 3.10, samples=10 00:20:52.823 lat (msec) : 20=100.00% 00:20:52.823 cpu : usr=91.37%, sys=8.05%, ctx=12, majf=0, minf=0 00:20:52.823 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.823 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.823 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:52.823 filename0: (groupid=0, jobs=1): err= 0: pid=83439: Tue Dec 10 21:47:52 2024 00:20:52.823 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5001msec) 00:20:52.823 slat (nsec): min=8165, max=45680, avg=17107.73, stdev=4859.84 00:20:52.823 clat (usec): min=11702, max=15193, avg=12248.37, stdev=163.08 00:20:52.823 lat (usec): min=11710, max=15218, avg=12265.48, stdev=163.68 00:20:52.823 clat percentiles (usec): 00:20:52.823 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:20:52.823 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12256], 60.00th=[12256], 00:20:52.823 | 70.00th=[12256], 80.00th=[12256], 90.00th=[12256], 95.00th=[12387], 00:20:52.823 | 99.00th=[12518], 99.50th=[12649], 99.90th=[15139], 99.95th=[15139], 00:20:52.823 | 99.99th=[15139] 00:20:52.823 bw ( KiB/s): min=30720, max=31488, per=33.35%, avg=31238.78, stdev=374.25, samples=9 00:20:52.823 iops : min= 240, max= 246, avg=244.00, stdev= 3.00, samples=9 00:20:52.823 lat (msec) : 20=100.00% 00:20:52.823 cpu : usr=91.24%, sys=8.20%, ctx=16, majf=0, minf=0 00:20:52.823 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.823 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.823 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:52.823 filename0: (groupid=0, jobs=1): err= 0: pid=83440: Tue Dec 10 21:47:52 2024 00:20:52.823 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5003msec) 00:20:52.823 slat (nsec): min=4992, max=49693, avg=17086.30, stdev=4796.73 00:20:52.823 clat (usec): min=12081, max=17194, avg=12253.79, stdev=254.71 00:20:52.823 lat (usec): min=12096, max=17211, avg=12270.87, stdev=254.84 00:20:52.824 clat percentiles (usec): 00:20:52.824 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12125], 20.00th=[12125], 00:20:52.824 | 30.00th=[12256], 40.00th=[12256], 
50.00th=[12256], 60.00th=[12256], 00:20:52.824 | 70.00th=[12256], 80.00th=[12256], 90.00th=[12387], 95.00th=[12387], 00:20:52.824 | 99.00th=[12518], 99.50th=[12518], 99.90th=[17171], 99.95th=[17171], 00:20:52.824 | 99.99th=[17171] 00:20:52.824 bw ( KiB/s): min=30658, max=31488, per=33.33%, avg=31225.11, stdev=394.74, samples=9 00:20:52.824 iops : min= 239, max= 246, avg=243.89, stdev= 3.18, samples=9 00:20:52.824 lat (msec) : 20=100.00% 00:20:52.824 cpu : usr=91.68%, sys=7.78%, ctx=11, majf=0, minf=0 00:20:52.824 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.824 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:52.824 00:20:52.824 Run status group 0 (all jobs): 00:20:52.824 READ: bw=91.5MiB/s (95.9MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=458MiB (480MB), run=5001-5005msec 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:52.824 21:47:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 bdev_null0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 [2024-12-10 21:47:52.564825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 bdev_null1 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 bdev_null2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.824 { 00:20:52.824 "params": { 00:20:52.824 "name": 
"Nvme$subsystem", 00:20:52.824 "trtype": "$TEST_TRANSPORT", 00:20:52.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.824 "adrfam": "ipv4", 00:20:52.824 "trsvcid": "$NVMF_PORT", 00:20:52.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.824 "hdgst": ${hdgst:-false}, 00:20:52.824 "ddgst": ${ddgst:-false} 00:20:52.824 }, 00:20:52.824 "method": "bdev_nvme_attach_controller" 00:20:52.824 } 00:20:52.824 EOF 00:20:52.824 )") 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.824 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.825 { 00:20:52.825 "params": { 00:20:52.825 "name": "Nvme$subsystem", 00:20:52.825 "trtype": "$TEST_TRANSPORT", 00:20:52.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.825 "adrfam": "ipv4", 00:20:52.825 "trsvcid": "$NVMF_PORT", 00:20:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.825 "hdgst": ${hdgst:-false}, 00:20:52.825 "ddgst": ${ddgst:-false} 00:20:52.825 }, 00:20:52.825 "method": "bdev_nvme_attach_controller" 00:20:52.825 } 00:20:52.825 EOF 00:20:52.825 )") 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:52.825 21:47:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.825 { 00:20:52.825 "params": { 00:20:52.825 "name": "Nvme$subsystem", 00:20:52.825 "trtype": "$TEST_TRANSPORT", 00:20:52.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.825 "adrfam": "ipv4", 00:20:52.825 "trsvcid": "$NVMF_PORT", 00:20:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.825 "hdgst": ${hdgst:-false}, 00:20:52.825 "ddgst": ${ddgst:-false} 00:20:52.825 }, 00:20:52.825 "method": "bdev_nvme_attach_controller" 00:20:52.825 } 00:20:52.825 EOF 00:20:52.825 )") 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:52.825 "params": { 00:20:52.825 "name": "Nvme0", 00:20:52.825 "trtype": "tcp", 00:20:52.825 "traddr": "10.0.0.3", 00:20:52.825 "adrfam": "ipv4", 00:20:52.825 "trsvcid": "4420", 00:20:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.825 "hdgst": false, 00:20:52.825 "ddgst": false 00:20:52.825 }, 00:20:52.825 "method": "bdev_nvme_attach_controller" 00:20:52.825 },{ 00:20:52.825 "params": { 00:20:52.825 "name": "Nvme1", 00:20:52.825 "trtype": "tcp", 00:20:52.825 "traddr": "10.0.0.3", 00:20:52.825 "adrfam": "ipv4", 00:20:52.825 "trsvcid": "4420", 00:20:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.825 "hdgst": false, 00:20:52.825 "ddgst": false 00:20:52.825 }, 00:20:52.825 "method": "bdev_nvme_attach_controller" 00:20:52.825 },{ 00:20:52.825 "params": { 00:20:52.825 "name": "Nvme2", 00:20:52.825 "trtype": "tcp", 00:20:52.825 "traddr": "10.0.0.3", 00:20:52.825 "adrfam": "ipv4", 00:20:52.825 "trsvcid": "4420", 00:20:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:52.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:52.825 "hdgst": false, 00:20:52.825 "ddgst": false 00:20:52.825 }, 00:20:52.825 "method": "bdev_nvme_attach_controller" 00:20:52.825 }' 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.825 21:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.825 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:52.825 ... 00:20:52.825 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:52.825 ... 00:20:52.825 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:52.825 ... 00:20:52.825 fio-3.35 00:20:52.825 Starting 24 threads 00:21:05.024 00:21:05.024 filename0: (groupid=0, jobs=1): err= 0: pid=83535: Tue Dec 10 21:48:03 2024 00:21:05.024 read: IOPS=241, BW=964KiB/s (987kB/s)(9648KiB/10007msec) 00:21:05.024 slat (usec): min=8, max=4033, avg=23.05, stdev=154.47 00:21:05.024 clat (msec): min=7, max=121, avg=66.28, stdev=20.10 00:21:05.024 lat (msec): min=7, max=121, avg=66.30, stdev=20.10 00:21:05.024 clat percentiles (msec): 00:21:05.024 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 50], 00:21:05.024 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 73], 00:21:05.024 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 100], 00:21:05.024 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:21:05.024 | 99.99th=[ 123] 00:21:05.024 bw ( KiB/s): min= 768, max= 1515, per=4.27%, avg=945.00, stdev=153.34, samples=19 00:21:05.024 iops : min= 192, max= 378, avg=236.21, stdev=38.18, samples=19 00:21:05.024 lat (msec) : 10=1.08%, 20=0.70%, 50=20.69%, 100=73.13%, 250=4.39% 00:21:05.024 cpu : usr=39.91%, sys=2.52%, ctx=1231, majf=0, minf=9 00:21:05.024 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.024 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.024 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.024 filename0: (groupid=0, jobs=1): err= 0: pid=83536: Tue Dec 10 21:48:03 2024 00:21:05.024 read: IOPS=227, BW=910KiB/s (932kB/s)(9152KiB/10056msec) 00:21:05.024 slat (usec): min=8, max=8044, avg=34.56, stdev=325.87 00:21:05.024 clat (msec): min=11, max=125, avg=69.99, stdev=19.67 00:21:05.024 lat (msec): min=11, max=125, avg=70.02, stdev=19.66 00:21:05.024 clat percentiles (msec): 00:21:05.024 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 52], 00:21:05.024 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 78], 00:21:05.024 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 101], 00:21:05.024 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 126], 00:21:05.024 | 99.99th=[ 126] 00:21:05.024 bw ( KiB/s): min= 720, max= 1680, per=4.12%, avg=911.60, stdev=191.64, samples=20 00:21:05.024 iops : min= 180, max= 420, avg=227.90, stdev=47.91, samples=20 00:21:05.024 lat (msec) : 20=1.40%, 50=16.22%, 100=77.36%, 250=5.03% 00:21:05.024 cpu : usr=37.55%, sys=2.53%, ctx=1186, majf=0, minf=9 00:21:05.024 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=81.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:05.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.024 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.024 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:21:05.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.024 filename0: (groupid=0, jobs=1): err= 0: pid=83537: Tue Dec 10 21:48:03 2024 00:21:05.024 read: IOPS=222, BW=891KiB/s (913kB/s)(8940KiB/10030msec) 00:21:05.025 slat (usec): min=4, max=8040, avg=22.48, stdev=200.85 00:21:05.025 clat (msec): min=21, max=136, avg=71.62, stdev=20.84 00:21:05.025 lat (msec): min=21, max=136, avg=71.65, stdev=20.85 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 53], 00:21:05.025 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:21:05.025 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 103], 00:21:05.025 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 134], 00:21:05.025 | 99.99th=[ 138] 00:21:05.025 bw ( KiB/s): min= 656, max= 1776, per=4.02%, avg=889.25, stdev=224.27, samples=20 00:21:05.025 iops : min= 164, max= 444, avg=222.30, stdev=56.06, samples=20 00:21:05.025 lat (msec) : 50=16.20%, 100=78.34%, 250=5.46% 00:21:05.025 cpu : usr=40.27%, sys=3.02%, ctx=1184, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename0: (groupid=0, jobs=1): err= 0: pid=83538: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=234, BW=940KiB/s (963kB/s)(9408KiB/10009msec) 00:21:05.025 slat (usec): min=5, max=8042, avg=29.11, stdev=330.39 00:21:05.025 clat (msec): min=10, max=131, avg=67.94, stdev=19.48 00:21:05.025 lat (msec): min=10, max=131, avg=67.97, stdev=19.49 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 49], 00:21:05.025 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:21:05.025 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 96], 00:21:05.025 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:21:05.025 | 99.99th=[ 132] 00:21:05.025 bw ( KiB/s): min= 720, max= 1484, per=4.21%, avg=930.32, stdev=152.03, samples=19 00:21:05.025 iops : min= 180, max= 371, avg=232.58, stdev=38.01, samples=19 00:21:05.025 lat (msec) : 20=0.64%, 50=22.79%, 100=72.75%, 250=3.83% 00:21:05.025 cpu : usr=30.99%, sys=2.23%, ctx=858, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename0: (groupid=0, jobs=1): err= 0: pid=83539: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=236, BW=945KiB/s (968kB/s)(9464KiB/10013msec) 00:21:05.025 slat (usec): min=5, max=8041, avg=56.83, stdev=569.59 00:21:05.025 clat (msec): min=21, max=128, avg=67.45, stdev=18.56 00:21:05.025 lat (msec): min=21, max=128, avg=67.51, stdev=18.56 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 49], 00:21:05.025 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:05.025 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 97], 
00:21:05.025 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:21:05.025 | 99.99th=[ 129] 00:21:05.025 bw ( KiB/s): min= 768, max= 1400, per=4.25%, avg=940.63, stdev=129.97, samples=19 00:21:05.025 iops : min= 192, max= 350, avg=235.16, stdev=32.49, samples=19 00:21:05.025 lat (msec) : 50=24.81%, 100=71.09%, 250=4.10% 00:21:05.025 cpu : usr=31.23%, sys=1.96%, ctx=847, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename0: (groupid=0, jobs=1): err= 0: pid=83540: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=219, BW=880KiB/s (901kB/s)(8840KiB/10049msec) 00:21:05.025 slat (usec): min=7, max=10030, avg=26.18, stdev=262.70 00:21:05.025 clat (msec): min=15, max=143, avg=72.50, stdev=21.38 00:21:05.025 lat (msec): min=15, max=143, avg=72.52, stdev=21.38 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 55], 00:21:05.025 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:21:05.025 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 106], 00:21:05.025 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 140], 00:21:05.025 | 99.99th=[ 144] 00:21:05.025 bw ( KiB/s): min= 640, max= 1792, per=3.98%, avg=880.40, stdev=226.96, samples=20 00:21:05.025 iops : min= 160, max= 448, avg=220.10, stdev=56.74, samples=20 00:21:05.025 lat (msec) : 20=1.36%, 50=14.25%, 100=76.20%, 250=8.19% 00:21:05.025 cpu : usr=39.60%, sys=2.88%, ctx=1204, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=78.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=88.8%, 8=10.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename0: (groupid=0, jobs=1): err= 0: pid=83541: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=243, BW=973KiB/s (997kB/s)(9736KiB/10002msec) 00:21:05.025 slat (usec): min=7, max=8026, avg=25.48, stdev=256.86 00:21:05.025 clat (msec): min=2, max=121, avg=65.62, stdev=20.96 00:21:05.025 lat (msec): min=2, max=121, avg=65.65, stdev=20.95 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:21:05.025 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 74], 00:21:05.025 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 99], 00:21:05.025 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:21:05.025 | 99.99th=[ 123] 00:21:05.025 bw ( KiB/s): min= 720, max= 1426, per=4.27%, avg=945.37, stdev=137.80, samples=19 00:21:05.025 iops : min= 180, max= 356, avg=236.32, stdev=34.35, samples=19 00:21:05.025 lat (msec) : 4=0.45%, 10=1.93%, 20=0.37%, 50=19.39%, 100=73.58% 00:21:05.025 lat (msec) : 250=4.27% 00:21:05.025 cpu : usr=40.43%, sys=2.69%, ctx=1341, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename0: (groupid=0, jobs=1): err= 0: pid=83542: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=240, BW=962KiB/s (985kB/s)(9620KiB/10004msec) 00:21:05.025 slat (usec): min=8, max=8031, avg=22.45, stdev=231.12 00:21:05.025 clat (msec): min=2, max=120, avg=66.46, stdev=20.62 00:21:05.025 lat (msec): min=2, max=120, avg=66.48, stdev=20.62 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:21:05.025 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 73], 00:21:05.025 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 99], 00:21:05.025 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:21:05.025 | 99.99th=[ 121] 00:21:05.025 bw ( KiB/s): min= 768, max= 1296, per=4.22%, avg=933.05, stdev=109.70, samples=19 00:21:05.025 iops : min= 192, max= 324, avg=233.26, stdev=27.42, samples=19 00:21:05.025 lat (msec) : 4=0.37%, 10=1.87%, 20=0.37%, 50=22.37%, 100=70.40% 00:21:05.025 lat (msec) : 250=4.62% 00:21:05.025 cpu : usr=33.45%, sys=2.12%, ctx=1023, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename1: (groupid=0, jobs=1): err= 0: pid=83543: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=221, BW=886KiB/s (907kB/s)(8896KiB/10046msec) 00:21:05.025 slat (usec): min=6, max=8031, avg=27.93, stdev=302.98 00:21:05.025 clat (msec): min=14, max=134, avg=72.08, stdev=19.97 00:21:05.025 lat (msec): min=14, max=134, avg=72.11, stdev=19.96 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 54], 00:21:05.025 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:21:05.025 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:21:05.025 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 132], 00:21:05.025 | 99.99th=[ 136] 00:21:05.025 bw ( KiB/s): min= 712, max= 1664, per=3.99%, avg=883.20, stdev=195.21, samples=20 00:21:05.025 iops : min= 178, max= 416, avg=220.80, stdev=48.80, samples=20 00:21:05.025 lat (msec) : 20=0.09%, 50=16.23%, 100=76.71%, 250=6.97% 00:21:05.025 cpu : usr=37.11%, sys=2.43%, ctx=1154, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.025 filename1: (groupid=0, jobs=1): err= 0: pid=83544: Tue Dec 10 21:48:03 2024 00:21:05.025 read: IOPS=230, BW=922KiB/s (944kB/s)(9276KiB/10064msec) 00:21:05.025 slat (usec): min=4, max=4019, avg=18.70, stdev=93.38 00:21:05.025 clat (msec): min=9, max=128, avg=69.22, stdev=21.83 00:21:05.025 lat (msec): min=9, max=128, avg=69.24, stdev=21.83 00:21:05.025 clat percentiles (msec): 00:21:05.025 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 
53], 00:21:05.025 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 79], 00:21:05.025 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 102], 00:21:05.025 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:21:05.025 | 99.99th=[ 129] 00:21:05.025 bw ( KiB/s): min= 656, max= 2168, per=4.17%, avg=921.20, stdev=301.98, samples=20 00:21:05.025 iops : min= 164, max= 542, avg=230.30, stdev=75.49, samples=20 00:21:05.025 lat (msec) : 10=0.09%, 20=2.63%, 50=15.39%, 100=76.58%, 250=5.30% 00:21:05.025 cpu : usr=42.07%, sys=2.85%, ctx=1353, majf=0, minf=9 00:21:05.025 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:05.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.025 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83545: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=230, BW=921KiB/s (943kB/s)(9264KiB/10056msec) 00:21:05.026 slat (usec): min=4, max=8025, avg=18.36, stdev=166.54 00:21:05.026 clat (msec): min=3, max=143, avg=69.23, stdev=23.96 00:21:05.026 lat (msec): min=3, max=144, avg=69.25, stdev=23.96 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 52], 00:21:05.026 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:21:05.026 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 103], 00:21:05.026 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 131], 00:21:05.026 | 99.99th=[ 144] 00:21:05.026 bw ( KiB/s): min= 688, max= 2472, per=4.17%, avg=922.30, stdev=368.85, samples=20 00:21:05.026 iops : min= 172, max= 618, avg=230.55, stdev=92.22, samples=20 00:21:05.026 lat (msec) : 4=0.69%, 10=1.94%, 20=3.02%, 50=12.65%, 100=75.17% 00:21:05.026 lat (msec) : 250=6.52% 00:21:05.026 cpu : usr=36.65%, sys=2.47%, ctx=1222, majf=0, minf=0 00:21:05.026 IO depths : 1=0.2%, 2=0.9%, 4=2.9%, 8=79.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=88.5%, 8=10.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83546: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=229, BW=916KiB/s (938kB/s)(9196KiB/10039msec) 00:21:05.026 slat (usec): min=8, max=8027, avg=26.24, stdev=289.27 00:21:05.026 clat (msec): min=20, max=143, avg=69.72, stdev=19.67 00:21:05.026 lat (msec): min=20, max=143, avg=69.75, stdev=19.67 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 51], 00:21:05.026 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 75], 00:21:05.026 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 105], 00:21:05.026 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 134], 00:21:05.026 | 99.99th=[ 144] 00:21:05.026 bw ( KiB/s): min= 736, max= 1680, per=4.13%, avg=913.20, stdev=191.51, samples=20 00:21:05.026 iops : min= 184, max= 420, avg=228.30, stdev=47.88, samples=20 00:21:05.026 lat (msec) : 50=19.92%, 100=74.64%, 250=5.44% 00:21:05.026 cpu : usr=31.15%, sys=2.13%, ctx=866, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.6%, 
16=16.3%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83547: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=234, BW=937KiB/s (959kB/s)(9392KiB/10026msec) 00:21:05.026 slat (usec): min=4, max=8027, avg=19.71, stdev=165.40 00:21:05.026 clat (msec): min=13, max=133, avg=68.19, stdev=19.89 00:21:05.026 lat (msec): min=13, max=133, avg=68.21, stdev=19.89 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:21:05.026 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 74], 00:21:05.026 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 99], 00:21:05.026 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:21:05.026 | 99.99th=[ 134] 00:21:05.026 bw ( KiB/s): min= 736, max= 1777, per=4.22%, avg=934.15, stdev=209.46, samples=20 00:21:05.026 iops : min= 184, max= 444, avg=233.50, stdev=52.31, samples=20 00:21:05.026 lat (msec) : 20=0.04%, 50=23.21%, 100=72.57%, 250=4.17% 00:21:05.026 cpu : usr=31.17%, sys=1.99%, ctx=852, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83548: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=229, BW=918KiB/s (940kB/s)(9212KiB/10034msec) 00:21:05.026 slat (usec): min=8, max=8049, avg=30.32, stdev=305.05 00:21:05.026 clat (msec): min=21, max=120, avg=69.56, stdev=18.44 00:21:05.026 lat (msec): min=21, max=120, avg=69.60, stdev=18.44 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 53], 00:21:05.026 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 77], 00:21:05.026 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 92], 95.00th=[ 103], 00:21:05.026 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 122], 00:21:05.026 | 99.99th=[ 122] 00:21:05.026 bw ( KiB/s): min= 736, max= 1424, per=4.13%, avg=914.80, stdev=136.88, samples=20 00:21:05.026 iops : min= 184, max= 356, avg=228.70, stdev=34.22, samples=20 00:21:05.026 lat (msec) : 50=16.85%, 100=77.29%, 250=5.86% 00:21:05.026 cpu : usr=38.22%, sys=2.56%, ctx=1471, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83549: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=229, BW=916KiB/s (938kB/s)(9208KiB/10051msec) 00:21:05.026 slat (usec): min=7, max=8027, avg=27.66, stdev=259.31 00:21:05.026 clat (msec): min=11, max=126, avg=69.58, stdev=20.94 00:21:05.026 lat (msec): min=11, max=126, 
avg=69.61, stdev=20.93 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 43], 20.00th=[ 52], 00:21:05.026 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 79], 00:21:05.026 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 102], 00:21:05.026 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 125], 00:21:05.026 | 99.99th=[ 127] 00:21:05.026 bw ( KiB/s): min= 720, max= 1928, per=4.15%, avg=917.20, stdev=250.00, samples=20 00:21:05.026 iops : min= 180, max= 482, avg=229.30, stdev=62.50, samples=20 00:21:05.026 lat (msec) : 20=0.78%, 50=16.85%, 100=77.11%, 250=5.26% 00:21:05.026 cpu : usr=39.67%, sys=2.71%, ctx=1528, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename1: (groupid=0, jobs=1): err= 0: pid=83550: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=236, BW=947KiB/s (970kB/s)(9492KiB/10018msec) 00:21:05.026 slat (usec): min=8, max=8025, avg=19.99, stdev=164.51 00:21:05.026 clat (msec): min=12, max=120, avg=67.44, stdev=19.59 00:21:05.026 lat (msec): min=13, max=120, avg=67.46, stdev=19.59 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:21:05.026 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 74], 00:21:05.026 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 92], 95.00th=[ 102], 00:21:05.026 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 122], 00:21:05.026 | 99.99th=[ 122] 00:21:05.026 bw ( KiB/s): min= 744, max= 1664, per=4.27%, avg=945.20, stdev=187.33, samples=20 00:21:05.026 iops : min= 186, max= 416, avg=236.30, stdev=46.83, samples=20 00:21:05.026 lat (msec) : 20=0.04%, 50=22.04%, 100=72.52%, 250=5.39% 00:21:05.026 cpu : usr=37.79%, sys=2.54%, ctx=1217, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename2: (groupid=0, jobs=1): err= 0: pid=83551: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=235, BW=941KiB/s (964kB/s)(9412KiB/10002msec) 00:21:05.026 slat (usec): min=7, max=8043, avg=25.68, stdev=286.20 00:21:05.026 clat (msec): min=2, max=120, avg=67.92, stdev=20.20 00:21:05.026 lat (msec): min=2, max=120, avg=67.95, stdev=20.20 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 49], 00:21:05.026 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:21:05.026 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 97], 00:21:05.026 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:21:05.026 | 99.99th=[ 122] 00:21:05.026 bw ( KiB/s): min= 744, max= 1218, per=4.15%, avg=917.16, stdev=95.80, samples=19 00:21:05.026 iops : min= 186, max= 304, avg=229.26, stdev=23.86, samples=19 00:21:05.026 lat (msec) : 4=0.38%, 10=1.91%, 20=0.30%, 50=20.95%, 100=72.33% 00:21:05.026 lat (msec) : 250=4.12% 
00:21:05.026 cpu : usr=31.23%, sys=2.05%, ctx=862, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.026 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.026 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.026 filename2: (groupid=0, jobs=1): err= 0: pid=83552: Tue Dec 10 21:48:03 2024 00:21:05.026 read: IOPS=231, BW=925KiB/s (947kB/s)(9300KiB/10052msec) 00:21:05.026 slat (usec): min=5, max=8027, avg=18.04, stdev=166.27 00:21:05.026 clat (usec): min=1536, max=131809, avg=68942.15, stdev=24640.20 00:21:05.026 lat (usec): min=1547, max=131824, avg=68960.19, stdev=24641.06 00:21:05.026 clat percentiles (msec): 00:21:05.026 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 50], 00:21:05.026 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:21:05.026 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:21:05.026 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:21:05.026 | 99.99th=[ 132] 00:21:05.026 bw ( KiB/s): min= 712, max= 2650, per=4.18%, avg=925.30, stdev=410.86, samples=20 00:21:05.026 iops : min= 178, max= 662, avg=231.30, stdev=102.60, samples=20 00:21:05.026 lat (msec) : 2=0.69%, 4=1.98%, 10=1.29%, 20=1.55%, 50=15.53% 00:21:05.026 lat (msec) : 100=73.16%, 250=5.81% 00:21:05.026 cpu : usr=31.60%, sys=1.77%, ctx=860, majf=0, minf=9 00:21:05.026 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:05.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83553: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=225, BW=901KiB/s (922kB/s)(9020KiB/10014msec) 00:21:05.027 slat (usec): min=8, max=8027, avg=24.22, stdev=211.15 00:21:05.027 clat (msec): min=23, max=124, avg=70.92, stdev=18.93 00:21:05.027 lat (msec): min=23, max=124, avg=70.94, stdev=18.93 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:21:05.027 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 79], 00:21:05.027 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 104], 00:21:05.027 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 125], 99.95th=[ 125], 00:21:05.027 | 99.99th=[ 125] 00:21:05.027 bw ( KiB/s): min= 656, max= 1168, per=4.06%, avg=897.90, stdev=102.72, samples=20 00:21:05.027 iops : min= 164, max= 292, avg=224.45, stdev=25.68, samples=20 00:21:05.027 lat (msec) : 50=16.27%, 100=77.16%, 250=6.56% 00:21:05.027 cpu : usr=40.90%, sys=2.61%, ctx=1395, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83554: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=221, BW=887KiB/s 
(908kB/s)(8916KiB/10053msec) 00:21:05.027 slat (usec): min=8, max=4151, avg=26.67, stdev=209.39 00:21:05.027 clat (msec): min=22, max=138, avg=71.88, stdev=19.72 00:21:05.027 lat (msec): min=22, max=138, avg=71.90, stdev=19.71 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 55], 00:21:05.027 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 80], 00:21:05.027 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 107], 00:21:05.027 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 138], 00:21:05.027 | 99.99th=[ 138] 00:21:05.027 bw ( KiB/s): min= 640, max= 1536, per=4.02%, avg=888.00, stdev=170.00, samples=20 00:21:05.027 iops : min= 160, max= 384, avg=222.00, stdev=42.50, samples=20 00:21:05.027 lat (msec) : 50=14.49%, 100=78.56%, 250=6.95% 00:21:05.027 cpu : usr=42.22%, sys=2.84%, ctx=1308, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=79.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83555: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=225, BW=900KiB/s (922kB/s)(9056KiB/10057msec) 00:21:05.027 slat (usec): min=5, max=8025, avg=23.08, stdev=238.81 00:21:05.027 clat (msec): min=14, max=132, avg=70.81, stdev=21.16 00:21:05.027 lat (msec): min=14, max=132, avg=70.83, stdev=21.16 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 51], 00:21:05.027 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:21:05.027 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:21:05.027 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 131], 00:21:05.027 | 99.99th=[ 133] 00:21:05.027 bw ( KiB/s): min= 728, max= 1904, per=4.07%, avg=899.20, stdev=246.81, samples=20 00:21:05.027 iops : min= 182, max= 476, avg=224.80, stdev=61.70, samples=20 00:21:05.027 lat (msec) : 20=0.93%, 50=18.07%, 100=74.65%, 250=6.36% 00:21:05.027 cpu : usr=32.08%, sys=2.23%, ctx=982, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.7%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83556: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=229, BW=918KiB/s (940kB/s)(9192KiB/10014msec) 00:21:05.027 slat (usec): min=8, max=8028, avg=28.08, stdev=288.74 00:21:05.027 clat (msec): min=25, max=135, avg=69.58, stdev=18.66 00:21:05.027 lat (msec): min=25, max=135, avg=69.60, stdev=18.67 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:21:05.027 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 75], 00:21:05.027 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:21:05.027 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 136], 00:21:05.027 | 99.99th=[ 136] 00:21:05.027 bw ( KiB/s): min= 656, max= 1394, per=4.14%, avg=915.70, stdev=141.06, 
samples=20 00:21:05.027 iops : min= 164, max= 348, avg=228.90, stdev=35.17, samples=20 00:21:05.027 lat (msec) : 50=20.06%, 100=74.54%, 250=5.40% 00:21:05.027 cpu : usr=35.12%, sys=2.20%, ctx=1066, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83557: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=232, BW=929KiB/s (952kB/s)(9296KiB/10003msec) 00:21:05.027 slat (usec): min=8, max=8029, avg=22.54, stdev=235.06 00:21:05.027 clat (msec): min=5, max=136, avg=68.76, stdev=20.36 00:21:05.027 lat (msec): min=5, max=136, avg=68.78, stdev=20.36 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 50], 00:21:05.027 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 74], 00:21:05.027 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 105], 00:21:05.027 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 136], 00:21:05.027 | 99.99th=[ 136] 00:21:05.027 bw ( KiB/s): min= 640, max= 1282, per=4.10%, avg=907.47, stdev=128.25, samples=19 00:21:05.027 iops : min= 160, max= 320, avg=226.84, stdev=31.98, samples=19 00:21:05.027 lat (msec) : 10=1.51%, 20=0.30%, 50=21.34%, 100=71.43%, 250=5.42% 00:21:05.027 cpu : usr=32.75%, sys=2.29%, ctx=956, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 filename2: (groupid=0, jobs=1): err= 0: pid=83558: Tue Dec 10 21:48:03 2024 00:21:05.027 read: IOPS=239, BW=957KiB/s (980kB/s)(9592KiB/10023msec) 00:21:05.027 slat (usec): min=8, max=8025, avg=24.60, stdev=216.54 00:21:05.027 clat (msec): min=21, max=123, avg=66.75, stdev=19.21 00:21:05.027 lat (msec): min=21, max=123, avg=66.78, stdev=19.21 00:21:05.027 clat percentiles (msec): 00:21:05.027 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:21:05.027 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 73], 00:21:05.027 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 99], 00:21:05.027 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:21:05.027 | 99.99th=[ 125] 00:21:05.027 bw ( KiB/s): min= 768, max= 1600, per=4.31%, avg=952.80, stdev=169.17, samples=20 00:21:05.027 iops : min= 192, max= 400, avg=238.20, stdev=42.29, samples=20 00:21:05.027 lat (msec) : 50=22.81%, 100=72.81%, 250=4.38% 00:21:05.027 cpu : usr=38.32%, sys=2.77%, ctx=1242, majf=0, minf=9 00:21:05.027 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:05.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.027 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.027 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:05.027 00:21:05.027 Run status group 0 (all jobs): 
00:21:05.027 READ: bw=21.6MiB/s (22.6MB/s), 880KiB/s-973KiB/s (901kB/s-997kB/s), io=217MiB (228MB), run=10002-10064msec 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.027 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 bdev_null0 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 [2024-12-10 21:48:03.876392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 bdev_null1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.028 { 00:21:05.028 "params": { 00:21:05.028 "name": "Nvme$subsystem", 00:21:05.028 "trtype": "$TEST_TRANSPORT", 00:21:05.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.028 "adrfam": "ipv4", 00:21:05.028 "trsvcid": "$NVMF_PORT", 00:21:05.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.028 "hdgst": ${hdgst:-false}, 00:21:05.028 "ddgst": ${ddgst:-false} 00:21:05.028 }, 00:21:05.028 "method": "bdev_nvme_attach_controller" 00:21:05.028 } 00:21:05.028 EOF 00:21:05.028 )") 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.028 { 00:21:05.028 "params": { 00:21:05.028 "name": "Nvme$subsystem", 00:21:05.028 "trtype": "$TEST_TRANSPORT", 00:21:05.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.028 "adrfam": "ipv4", 00:21:05.028 "trsvcid": "$NVMF_PORT", 00:21:05.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.028 "hdgst": ${hdgst:-false}, 00:21:05.028 "ddgst": ${ddgst:-false} 00:21:05.028 }, 00:21:05.028 "method": "bdev_nvme_attach_controller" 00:21:05.028 } 00:21:05.028 EOF 00:21:05.028 )") 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.028 "params": { 00:21:05.028 "name": "Nvme0", 00:21:05.028 "trtype": "tcp", 00:21:05.028 "traddr": "10.0.0.3", 00:21:05.028 "adrfam": "ipv4", 00:21:05.028 "trsvcid": "4420", 00:21:05.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.028 "hdgst": false, 00:21:05.028 "ddgst": false 00:21:05.028 }, 00:21:05.028 "method": "bdev_nvme_attach_controller" 00:21:05.028 },{ 00:21:05.028 "params": { 00:21:05.028 "name": "Nvme1", 00:21:05.028 "trtype": "tcp", 00:21:05.028 "traddr": "10.0.0.3", 00:21:05.028 "adrfam": "ipv4", 00:21:05.028 "trsvcid": "4420", 00:21:05.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.028 "hdgst": false, 00:21:05.028 "ddgst": false 00:21:05.028 }, 00:21:05.028 "method": "bdev_nvme_attach_controller" 00:21:05.028 }' 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:05.028 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:05.029 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:05.029 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.029 21:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.029 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:05.029 ... 00:21:05.029 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:05.029 ... 
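The setup traced above amounts to exporting two DIF type-1 null bdevs over NVMe/TCP and pointing fio's spdk_bdev ioengine at them through a bdev_nvme attach config. The lines below are a minimal sketch of the same flow outside the harness, not the exact commands dif.sh runs; they assume a running nvmf_tgt with the TCP transport already created, rpc.py on its default socket, and hypothetical file names nvme0.json and dif.fio (the harness instead streams both over /dev/fd/62 and /dev/fd/61).

# Target side (sketch): DIF type-1 null bdev behind an NVMe/TCP subsystem.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Initiator side (sketch): nvme0.json carries the bdev_nvme_attach_controller entry
# printed above, wrapped in the standard {"subsystems":[{"subsystem":"bdev","config":[...]}]}
# envelope; dif.fio is a job file with thread=1, filename=Nvme0n1, rw=randread, bs=8k, iodepth=8.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=nvme0.json dif.fio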
00:21:05.029 fio-3.35 00:21:05.029 Starting 4 threads 00:21:09.225 00:21:09.225 filename0: (groupid=0, jobs=1): err= 0: pid=83696: Tue Dec 10 21:48:09 2024 00:21:09.225 read: IOPS=2459, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5002msec) 00:21:09.225 slat (nsec): min=7886, max=36077, avg=10610.16, stdev=2848.16 00:21:09.225 clat (usec): min=651, max=7277, avg=3226.58, stdev=1146.55 00:21:09.225 lat (usec): min=660, max=7293, avg=3237.19, stdev=1146.31 00:21:09.225 clat percentiles (usec): 00:21:09.225 | 1.00th=[ 1434], 5.00th=[ 1450], 10.00th=[ 1450], 20.00th=[ 1483], 00:21:09.225 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3425], 60.00th=[ 4015], 00:21:09.225 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:21:09.225 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5473], 99.95th=[ 6390], 00:21:09.225 | 99.99th=[ 6849] 00:21:09.225 bw ( KiB/s): min=19152, max=20119, per=31.38%, avg=19942.11, stdev=302.81, samples=9 00:21:09.225 iops : min= 2394, max= 2514, avg=2492.67, stdev=37.79, samples=9 00:21:09.225 lat (usec) : 750=0.05%, 1000=0.15% 00:21:09.225 lat (msec) : 2=25.58%, 4=31.07%, 10=43.16% 00:21:09.225 cpu : usr=90.00%, sys=8.98%, ctx=6, majf=0, minf=0 00:21:09.225 IO depths : 1=0.1%, 2=0.2%, 4=63.9%, 8=35.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 issued rwts: total=12300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.225 filename0: (groupid=0, jobs=1): err= 0: pid=83697: Tue Dec 10 21:48:09 2024 00:21:09.225 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:21:09.225 slat (nsec): min=4504, max=48817, avg=16711.12, stdev=3450.08 00:21:09.225 clat (usec): min=1635, max=8604, avg=4310.92, stdev=322.91 00:21:09.225 lat (usec): min=1650, max=8620, avg=4327.64, stdev=322.84 00:21:09.225 clat percentiles (usec): 00:21:09.225 | 1.00th=[ 3392], 5.00th=[ 3949], 10.00th=[ 4228], 20.00th=[ 4293], 00:21:09.225 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:21:09.225 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4424], 00:21:09.225 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6325], 99.95th=[ 6718], 00:21:09.225 | 99.99th=[ 8586] 00:21:09.225 bw ( KiB/s): min=14352, max=14592, per=22.87%, avg=14533.56, stdev=86.98, samples=9 00:21:09.225 iops : min= 1794, max= 1824, avg=1816.67, stdev=10.86, samples=9 00:21:09.225 lat (msec) : 2=0.23%, 4=5.55%, 10=94.22% 00:21:09.225 cpu : usr=91.72%, sys=7.44%, ctx=15, majf=0, minf=0 00:21:09.225 IO depths : 1=0.1%, 2=23.1%, 4=51.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.225 filename1: (groupid=0, jobs=1): err= 0: pid=83698: Tue Dec 10 21:48:09 2024 00:21:09.225 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:21:09.225 slat (usec): min=8, max=299, avg=16.95, stdev= 5.73 00:21:09.225 clat (usec): min=1638, max=8613, avg=4308.77, stdev=323.95 00:21:09.225 lat (usec): min=1653, max=8629, avg=4325.72, stdev=323.86 00:21:09.225 clat percentiles (usec): 00:21:09.225 | 1.00th=[ 3392], 5.00th=[ 3949], 10.00th=[ 4228], 20.00th=[ 4293], 
00:21:09.225 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:21:09.225 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4424], 00:21:09.225 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6325], 99.95th=[ 6718], 00:21:09.225 | 99.99th=[ 8586] 00:21:09.225 bw ( KiB/s): min=14352, max=14592, per=22.87%, avg=14536.67, stdev=84.65, samples=9 00:21:09.225 iops : min= 1794, max= 1824, avg=1817.00, stdev=10.61, samples=9 00:21:09.225 lat (msec) : 2=0.23%, 4=5.71%, 10=94.06% 00:21:09.225 cpu : usr=90.78%, sys=8.08%, ctx=61, majf=0, minf=0 00:21:09.225 IO depths : 1=0.1%, 2=23.1%, 4=51.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.225 filename1: (groupid=0, jobs=1): err= 0: pid=83699: Tue Dec 10 21:48:09 2024 00:21:09.225 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:21:09.225 slat (nsec): min=5417, max=57788, avg=16174.84, stdev=3864.44 00:21:09.225 clat (usec): min=1593, max=8600, avg=4313.16, stdev=324.76 00:21:09.225 lat (usec): min=1614, max=8613, avg=4329.33, stdev=324.59 00:21:09.225 clat percentiles (usec): 00:21:09.225 | 1.00th=[ 3392], 5.00th=[ 3949], 10.00th=[ 4228], 20.00th=[ 4293], 00:21:09.225 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:21:09.225 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4424], 00:21:09.225 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[ 6783], 00:21:09.225 | 99.99th=[ 8586] 00:21:09.225 bw ( KiB/s): min=14352, max=14592, per=22.87%, avg=14533.56, stdev=86.98, samples=9 00:21:09.225 iops : min= 1794, max= 1824, avg=1816.67, stdev=10.86, samples=9 00:21:09.225 lat (msec) : 2=0.23%, 4=5.53%, 10=94.24% 00:21:09.225 cpu : usr=91.04%, sys=8.00%, ctx=1209, majf=0, minf=1 00:21:09.225 IO depths : 1=0.1%, 2=23.1%, 4=51.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:09.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.225 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:09.225 00:21:09.225 Run status group 0 (all jobs): 00:21:09.225 READ: bw=62.1MiB/s (65.1MB/s), 14.3MiB/s-19.2MiB/s (15.0MB/s-20.1MB/s), io=310MiB (326MB), run=5001-5002msec 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.225 21:48:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:09.225 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 ************************************ 00:21:09.226 END TEST fio_dif_rand_params 00:21:09.226 ************************************ 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 00:21:09.226 real 0m23.295s 00:21:09.226 user 2m1.694s 00:21:09.226 sys 0m9.407s 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 21:48:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:09.226 21:48:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:09.226 21:48:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 ************************************ 00:21:09.226 START TEST fio_dif_digest 00:21:09.226 ************************************ 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 bdev_null0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:09.226 [2024-12-10 21:48:09.952737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.226 { 00:21:09.226 "params": { 00:21:09.226 "name": "Nvme$subsystem", 00:21:09.226 "trtype": "$TEST_TRANSPORT", 00:21:09.226 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:09.226 "adrfam": "ipv4", 00:21:09.226 "trsvcid": "$NVMF_PORT", 00:21:09.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.226 "hdgst": ${hdgst:-false}, 00:21:09.226 "ddgst": ${ddgst:-false} 00:21:09.226 }, 00:21:09.226 "method": "bdev_nvme_attach_controller" 00:21:09.226 } 00:21:09.226 EOF 00:21:09.226 )") 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:09.226 "params": { 00:21:09.226 "name": "Nvme0", 00:21:09.226 "trtype": "tcp", 00:21:09.226 "traddr": "10.0.0.3", 00:21:09.226 "adrfam": "ipv4", 00:21:09.226 "trsvcid": "4420", 00:21:09.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:09.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:09.226 "hdgst": true, 00:21:09.226 "ddgst": true 00:21:09.226 }, 00:21:09.226 "method": "bdev_nvme_attach_controller" 00:21:09.226 }' 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.226 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:09.227 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:09.227 21:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:09.485 21:48:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:09.485 21:48:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:09.485 21:48:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:09.485 21:48:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:09.485 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:09.485 ... 
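The digest variant differs from the rand_params run in only two places: the backing null bdev is created with --dif-type 3, and the bdev_nvme_attach_controller parameters set hdgst and ddgst to true, so the initiator negotiates NVMe/TCP header and data digests (CRC32C) on the connection. Below is a minimal sketch of that attach config under the same 10.0.0.3:4420 listener; digest.json is a hypothetical file name, the params block mirrors the JSON printed above, and the outer envelope is the usual SPDK bdev-subsystem wrapper rather than anything specific to dif.sh.

# Backing namespace for the digest run (sketch).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Attach config handed to fio's spdk_bdev ioengine; hdgst/ddgst enable TCP digests.
cat > digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

fio is then launched with the same LD_PRELOAD/spdk_bdev invocation as before, this time with a 128 KiB randread job at iodepth 3 across three threads for 10 seconds, as the output that follows shows.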
00:21:09.485 fio-3.35 00:21:09.485 Starting 3 threads 00:21:21.687 00:21:21.687 filename0: (groupid=0, jobs=1): err= 0: pid=83804: Tue Dec 10 21:48:20 2024 00:21:21.687 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10007msec) 00:21:21.687 slat (nsec): min=8348, max=90521, avg=18203.03, stdev=6168.08 00:21:21.687 clat (usec): min=9512, max=18246, avg=14072.02, stdev=589.60 00:21:21.687 lat (usec): min=9544, max=18263, avg=14090.22, stdev=589.94 00:21:21.687 clat percentiles (usec): 00:21:21.687 | 1.00th=[13829], 5.00th=[13829], 10.00th=[13829], 20.00th=[13829], 00:21:21.687 | 30.00th=[13829], 40.00th=[13829], 50.00th=[13960], 60.00th=[13960], 00:21:21.687 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[15008], 00:21:21.687 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:21:21.687 | 99.99th=[18220] 00:21:21.687 bw ( KiB/s): min=25344, max=27648, per=33.33%, avg=27203.37, stdev=643.36, samples=19 00:21:21.687 iops : min= 198, max= 216, avg=212.53, stdev= 5.03, samples=19 00:21:21.687 lat (msec) : 10=0.14%, 20=99.86% 00:21:21.687 cpu : usr=90.78%, sys=8.63%, ctx=9, majf=0, minf=0 00:21:21.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:21.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:21.687 filename0: (groupid=0, jobs=1): err= 0: pid=83805: Tue Dec 10 21:48:20 2024 00:21:21.687 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10008msec) 00:21:21.687 slat (nsec): min=8448, max=65510, avg=18239.11, stdev=5678.52 00:21:21.687 clat (usec): min=9553, max=18230, avg=14072.67, stdev=589.54 00:21:21.687 lat (usec): min=9585, max=18259, avg=14090.91, stdev=590.05 00:21:21.687 clat percentiles (usec): 00:21:21.687 | 1.00th=[13829], 5.00th=[13829], 10.00th=[13829], 20.00th=[13829], 00:21:21.687 | 30.00th=[13829], 40.00th=[13960], 50.00th=[13960], 60.00th=[13960], 00:21:21.687 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14877], 00:21:21.687 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:21:21.687 | 99.99th=[18220] 00:21:21.687 bw ( KiB/s): min=25344, max=27648, per=33.33%, avg=27203.37, stdev=643.36, samples=19 00:21:21.687 iops : min= 198, max= 216, avg=212.53, stdev= 5.03, samples=19 00:21:21.687 lat (msec) : 10=0.14%, 20=99.86% 00:21:21.687 cpu : usr=91.27%, sys=8.14%, ctx=11, majf=0, minf=0 00:21:21.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:21.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:21.687 filename0: (groupid=0, jobs=1): err= 0: pid=83806: Tue Dec 10 21:48:20 2024 00:21:21.687 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10006msec) 00:21:21.687 slat (usec): min=6, max=193, avg=17.32, stdev= 6.80 00:21:21.687 clat (usec): min=8707, max=18270, avg=14072.86, stdev=599.57 00:21:21.687 lat (usec): min=8713, max=18286, avg=14090.18, stdev=599.71 00:21:21.687 clat percentiles (usec): 00:21:21.687 | 1.00th=[13698], 5.00th=[13829], 10.00th=[13829], 20.00th=[13829], 00:21:21.687 | 30.00th=[13829], 40.00th=[13960], 
50.00th=[13960], 60.00th=[13960], 00:21:21.687 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[15008], 00:21:21.687 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:21:21.687 | 99.99th=[18220] 00:21:21.687 bw ( KiB/s): min=24576, max=27648, per=33.33%, avg=27203.37, stdev=738.23, samples=19 00:21:21.687 iops : min= 192, max= 216, avg=212.53, stdev= 5.77, samples=19 00:21:21.687 lat (msec) : 10=0.14%, 20=99.86% 00:21:21.687 cpu : usr=90.42%, sys=8.71%, ctx=90, majf=0, minf=0 00:21:21.687 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:21.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.687 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.687 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:21.687 00:21:21.687 Run status group 0 (all jobs): 00:21:21.687 READ: bw=79.7MiB/s (83.6MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=798MiB (836MB), run=10006-10008msec 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.687 00:21:21.687 real 0m10.931s 00:21:21.687 user 0m27.879s 00:21:21.687 sys 0m2.771s 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.687 21:48:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 ************************************ 00:21:21.687 END TEST fio_dif_digest 00:21:21.687 ************************************ 00:21:21.687 21:48:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:21.687 21:48:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:21.687 21:48:20 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.687 21:48:20 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.688 rmmod nvme_tcp 00:21:21.688 rmmod nvme_fabrics 00:21:21.688 rmmod nvme_keyring 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.688 21:48:20 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83068 ']' 00:21:21.688 21:48:20 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83068 00:21:21.688 21:48:20 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83068 ']' 00:21:21.688 21:48:20 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83068 00:21:21.688 21:48:20 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83068 00:21:21.688 killing process with pid 83068 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83068' 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83068 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83068 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:21.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.688 Waiting for block devices as requested 00:21:21.688 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.688 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.688 21:48:21 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.688 21:48:21 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:21.688 00:21:21.688 real 0m58.881s 00:21:21.688 user 3m45.445s 00:21:21.688 sys 0m20.266s 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.688 21:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:21.688 ************************************ 00:21:21.688 END TEST nvmf_dif 00:21:21.688 ************************************ 00:21:21.688 21:48:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:21.688 21:48:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:21.688 21:48:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.688 21:48:22 -- common/autotest_common.sh@10 -- # set +x 00:21:21.688 ************************************ 00:21:21.688 START TEST nvmf_abort_qd_sizes 00:21:21.688 ************************************ 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:21.688 * Looking for test storage... 00:21:21.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.688 --rc genhtml_branch_coverage=1 00:21:21.688 --rc genhtml_function_coverage=1 00:21:21.688 --rc genhtml_legend=1 00:21:21.688 --rc geninfo_all_blocks=1 00:21:21.688 --rc geninfo_unexecuted_blocks=1 00:21:21.688 00:21:21.688 ' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.688 --rc genhtml_branch_coverage=1 00:21:21.688 --rc genhtml_function_coverage=1 00:21:21.688 --rc genhtml_legend=1 00:21:21.688 --rc geninfo_all_blocks=1 00:21:21.688 --rc geninfo_unexecuted_blocks=1 00:21:21.688 00:21:21.688 ' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.688 --rc genhtml_branch_coverage=1 00:21:21.688 --rc genhtml_function_coverage=1 00:21:21.688 --rc genhtml_legend=1 00:21:21.688 --rc geninfo_all_blocks=1 00:21:21.688 --rc geninfo_unexecuted_blocks=1 00:21:21.688 00:21:21.688 ' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.688 --rc genhtml_branch_coverage=1 00:21:21.688 --rc genhtml_function_coverage=1 00:21:21.688 --rc genhtml_legend=1 00:21:21.688 --rc geninfo_all_blocks=1 00:21:21.688 --rc geninfo_unexecuted_blocks=1 00:21:21.688 00:21:21.688 ' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.688 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.689 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:21.689 Cannot find device "nvmf_init_br" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:21.689 Cannot find device "nvmf_init_br2" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:21.689 Cannot find device "nvmf_tgt_br" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.689 Cannot find device "nvmf_tgt_br2" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:21.689 Cannot find device "nvmf_init_br" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:21.689 Cannot find device "nvmf_init_br2" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:21.689 Cannot find device "nvmf_tgt_br" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:21.689 Cannot find device "nvmf_tgt_br2" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:21.689 Cannot find device "nvmf_br" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:21.689 Cannot find device "nvmf_init_if" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:21.689 Cannot find device "nvmf_init_if2" 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
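The nvmf_veth_init sequence that follows builds a bridged veth topology: two initiator-side interfaces kept in the default namespace (10.0.0.1/24 and 10.0.0.2/24), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), all four peer ends enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420. A reduced sketch with one pair per side, assuming root and iproute2; the names and addresses mirror this run, while the second pair and the firewall rules are left out:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                 # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                      # initiator reaching the target namespace

The pings logged further down (10.0.0.3, 10.0.0.4 from the default namespace; 10.0.0.1, 10.0.0.2 from inside nvmf_tgt_ns_spdk) are the full helper's equivalent of that last check.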
00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:21.689 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:21.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:21.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:21:21.948 00:21:21.948 --- 10.0.0.3 ping statistics --- 00:21:21.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.948 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:21.948 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:21.948 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:21:21.948 00:21:21.948 --- 10.0.0.4 ping statistics --- 00:21:21.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.948 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:21.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:21.948 00:21:21.948 --- 10.0.0.1 ping statistics --- 00:21:21.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.948 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:21.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:21.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:21.948 00:21:21.948 --- 10.0.0.2 ping statistics --- 00:21:21.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.948 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:21.948 21:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:22.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:22.774 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:22.774 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84455 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84455 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84455 ']' 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.774 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:22.774 [2024-12-10 21:48:23.553638] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:21:22.775 [2024-12-10 21:48:23.553738] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.033 [2024-12-10 21:48:23.698599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.033 [2024-12-10 21:48:23.734484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.033 [2024-12-10 21:48:23.734541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.033 [2024-12-10 21:48:23.734553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.033 [2024-12-10 21:48:23.734562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.033 [2024-12-10 21:48:23.734569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.033 [2024-12-10 21:48:23.735382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.033 [2024-12-10 21:48:23.735486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.033 [2024-12-10 21:48:23.735553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.033 [2024-12-10 21:48:23.735556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.033 [2024-12-10 21:48:23.767203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:23.291 21:48:23 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:23.291 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:23.292 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:23.292 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:23.292 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:23.292 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:23.292 21:48:23 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
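The nvme_in_userspace enumeration above reduces to filtering lspci output on PCI class 01 / subclass 08 / prog-if 02 (NVMe) and keeping devices bound to the kernel nvme driver. A minimal standalone sketch of that filter, assuming lspci and sysfs are available (the loop and the echo are illustrative, not part of the test scripts):

    for bdf in $(lspci -mm -n -D | awk '$2 ~ /0108/ {print $1}'); do
        # keep only controllers currently bound to the kernel nvme driver,
        # matching the /sys/bus/pci/drivers/nvme check in scripts/common.sh
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "NVMe controller: $bdf"
    done

In this run the filter yields 0000:00:10.0 and 0000:00:11.0, and the test takes the first BDF as the spdk_target_abort device.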
00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.292 21:48:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:23.292 ************************************ 00:21:23.292 START TEST spdk_target_abort 00:21:23.292 ************************************ 00:21:23.292 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:23.292 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:23.292 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:23.292 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.292 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:23.550 spdk_targetn1 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:23.550 [2024-12-10 21:48:24.098197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:23.550 [2024-12-10 21:48:24.134046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.550 21:48:24 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:23.550 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:23.551 21:48:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:26.899 Initializing NVMe Controllers 00:21:26.899 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:26.899 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:26.899 Initialization complete. Launching workers. 
00:21:26.899 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11420, failed: 0 00:21:26.899 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1010, failed to submit 10410 00:21:26.899 success 794, unsuccessful 216, failed 0 00:21:26.899 21:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:26.899 21:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:30.185 Initializing NVMe Controllers 00:21:30.185 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:30.185 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:30.185 Initialization complete. Launching workers. 00:21:30.185 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8780, failed: 0 00:21:30.185 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1147, failed to submit 7633 00:21:30.185 success 386, unsuccessful 761, failed 0 00:21:30.185 21:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:30.185 21:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:33.467 Initializing NVMe Controllers 00:21:33.467 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:33.467 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:33.467 Initialization complete. Launching workers. 
00:21:33.467 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30458, failed: 0 00:21:33.467 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2276, failed to submit 28182 00:21:33.467 success 441, unsuccessful 1835, failed 0 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.467 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84455 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84455 ']' 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84455 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84455 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.033 killing process with pid 84455 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84455' 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84455 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84455 00:21:34.033 00:21:34.033 real 0m10.745s 00:21:34.033 user 0m41.481s 00:21:34.033 sys 0m2.155s 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:34.033 ************************************ 00:21:34.033 END TEST spdk_target_abort 00:21:34.033 ************************************ 00:21:34.033 21:48:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:34.033 21:48:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.033 21:48:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.033 21:48:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:34.033 ************************************ 00:21:34.033 START TEST kernel_target_abort 00:21:34.033 
************************************ 00:21:34.033 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:34.291 21:48:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:34.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:34.549 Waiting for block devices as requested 00:21:34.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:34.807 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:34.807 No valid GPT data, bailing 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:34.807 No valid GPT data, bailing 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:34.807 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:35.066 No valid GPT data, bailing 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:35.066 No valid GPT data, bailing 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:35.066 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c --hostid=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c -a 10.0.0.1 -t tcp -s 4420 00:21:35.067 00:21:35.067 Discovery Log Number of Records 2, Generation counter 2 00:21:35.067 =====Discovery Log Entry 0====== 00:21:35.067 trtype: tcp 00:21:35.067 adrfam: ipv4 00:21:35.067 subtype: current discovery subsystem 00:21:35.067 treq: not specified, sq flow control disable supported 00:21:35.067 portid: 1 00:21:35.067 trsvcid: 4420 00:21:35.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:35.067 traddr: 10.0.0.1 00:21:35.067 eflags: none 00:21:35.067 sectype: none 00:21:35.067 =====Discovery Log Entry 1====== 00:21:35.067 trtype: tcp 00:21:35.067 adrfam: ipv4 00:21:35.067 subtype: nvme subsystem 00:21:35.067 treq: not specified, sq flow control disable supported 00:21:35.067 portid: 1 00:21:35.067 trsvcid: 4420 00:21:35.067 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:35.067 traddr: 10.0.0.1 00:21:35.067 eflags: none 00:21:35.067 sectype: none 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:35.067 21:48:35 
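The mkdir/echo/ln calls traced above are the stock Linux nvmet configfs sequence for exporting a local block device over NVMe/TCP. Only the echoed values appear in the trace, so the attribute names in this minimal standalone sketch are the standard nvmet ones (an assumption, not something the log shows); the device and NQN are the ones this run picked:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"                                          # the test subsystem
echo 1 > "$subsys/attr_allow_any_host"                   # no host whitelist for the test
mkdir "$subsys/namespaces/1"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # the unpartitioned disk found above
echo 1 > "$subsys/namespaces/1/enable"
mkdir "$port"                                            # TCP listener on 10.0.0.1:4420
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420                 # prints the two discovery log entries shown above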
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:35.067 21:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:38.375 Initializing NVMe Controllers 00:21:38.375 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:38.375 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:38.375 Initialization complete. Launching workers. 00:21:38.375 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33861, failed: 0 00:21:38.375 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33861, failed to submit 0 00:21:38.375 success 0, unsuccessful 33861, failed 0 00:21:38.375 21:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:38.375 21:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:41.656 Initializing NVMe Controllers 00:21:41.656 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:41.656 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:41.656 Initialization complete. Launching workers. 
00:21:41.656 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66558, failed: 0 00:21:41.656 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29168, failed to submit 37390 00:21:41.656 success 0, unsuccessful 29168, failed 0 00:21:41.656 21:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:41.656 21:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:44.937 Initializing NVMe Controllers 00:21:44.937 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:44.937 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:44.937 Initialization complete. Launching workers. 00:21:44.937 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75960, failed: 0 00:21:44.937 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18954, failed to submit 57006 00:21:44.937 success 0, unsuccessful 18954, failed 0 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:44.937 21:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:45.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:47.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:47.095 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:47.353 00:21:47.353 real 0m13.067s 00:21:47.353 user 0m6.511s 00:21:47.353 sys 0m4.025s 00:21:47.353 ************************************ 00:21:47.353 END TEST kernel_target_abort 00:21:47.353 ************************************ 00:21:47.353 21:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.353 21:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:47.353 
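The three passes above sweep the abort example's queue depth through 4, 24 and 64: at -q 4 all 33861 aborts were submitted, while at -q 24 and -q 64 the tool reports 37390 and 57006 aborts it could not submit, which is the queue-depth behaviour this test checks. One pass can be reproduced by hand against the same kernel target with the binary from the build tree (-q queue depth, -w rw with -M 50 for a 50/50 read/write mix, -o 4096-byte I/O, -r the transport ID string):

/home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'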
21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.353 rmmod nvme_tcp 00:21:47.353 rmmod nvme_fabrics 00:21:47.353 rmmod nvme_keyring 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84455 ']' 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84455 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84455 ']' 00:21:47.353 21:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84455 00:21:47.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84455) - No such process 00:21:47.353 Process with pid 84455 is not found 00:21:47.353 21:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84455 is not found' 00:21:47.354 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:47.354 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:47.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:47.613 Waiting for block devices as requested 00:21:47.613 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:47.875 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:47.875 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:47.875 21:48:48 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:48.134 00:21:48.134 real 0m26.775s 00:21:48.134 user 0m49.178s 00:21:48.134 sys 0m7.523s 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.134 21:48:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:48.134 ************************************ 00:21:48.134 END TEST nvmf_abort_qd_sizes 00:21:48.134 ************************************ 00:21:48.134 21:48:48 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:48.134 21:48:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:48.134 21:48:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.134 21:48:48 -- common/autotest_common.sh@10 -- # set +x 00:21:48.134 ************************************ 00:21:48.134 START TEST keyring_file 00:21:48.134 ************************************ 00:21:48.134 21:48:48 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:48.134 * Looking for test storage... 
00:21:48.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:48.134 21:48:48 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.392 21:48:48 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.392 21:48:48 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.392 21:48:49 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:48.392 21:48:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.393 --rc genhtml_branch_coverage=1 00:21:48.393 --rc genhtml_function_coverage=1 00:21:48.393 --rc genhtml_legend=1 00:21:48.393 --rc geninfo_all_blocks=1 00:21:48.393 --rc geninfo_unexecuted_blocks=1 00:21:48.393 00:21:48.393 ' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.393 --rc genhtml_branch_coverage=1 00:21:48.393 --rc genhtml_function_coverage=1 00:21:48.393 --rc genhtml_legend=1 00:21:48.393 --rc geninfo_all_blocks=1 00:21:48.393 --rc 
geninfo_unexecuted_blocks=1 00:21:48.393 00:21:48.393 ' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.393 --rc genhtml_branch_coverage=1 00:21:48.393 --rc genhtml_function_coverage=1 00:21:48.393 --rc genhtml_legend=1 00:21:48.393 --rc geninfo_all_blocks=1 00:21:48.393 --rc geninfo_unexecuted_blocks=1 00:21:48.393 00:21:48.393 ' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.393 --rc genhtml_branch_coverage=1 00:21:48.393 --rc genhtml_function_coverage=1 00:21:48.393 --rc genhtml_legend=1 00:21:48.393 --rc geninfo_all_blocks=1 00:21:48.393 --rc geninfo_unexecuted_blocks=1 00:21:48.393 00:21:48.393 ' 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.393 21:48:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.393 21:48:49 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.393 21:48:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.393 21:48:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.393 21:48:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:48.393 21:48:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:48.393 21:48:49 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.scW7x0JFNh 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.scW7x0JFNh 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.scW7x0JFNh 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.scW7x0JFNh 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9P0rrb0b7B 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:48.393 21:48:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9P0rrb0b7B 00:21:48.393 21:48:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9P0rrb0b7B 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9P0rrb0b7B 00:21:48.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
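prep_key above turns each hex key into the NVMe TLS PSK interchange format and stores it in a mode-0600 temp file. A rough standalone sketch of that conversion, assuming the interchange layout is 'NVMeTLSkey-1:<digest>:' followed by base64 of the key bytes plus their CRC-32 (this mirrors what format_interchange_psk appears to do and is only an illustration, not the helper's actual code):

key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key" <<'EOF' > "$path"
import base64, sys, zlib
# configured PSK bytes followed by their CRC-32 (little-endian), base64-encoded;
# the 00 field corresponds to digest 0 in the trace (no PSK hash requested)
raw = bytes.fromhex(sys.argv[1])
payload = raw + zlib.crc32(raw).to_bytes(4, "little")
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(payload).decode())
EOF
chmod 0600 "$path"   # the keyring_file module rejects group/other-readable files (exercised further down)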
00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=85361 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:48.393 21:48:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85361 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85361 ']' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.393 21:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:48.652 [2024-12-10 21:48:49.215258] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:21:48.652 [2024-12-10 21:48:49.215359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85361 ] 00:21:48.652 [2024-12-10 21:48:49.356117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.652 [2024-12-10 21:48:49.388947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.652 [2024-12-10 21:48:49.428250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:48.910 21:48:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:48.910 [2024-12-10 21:48:49.563082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.910 null0 00:21:48.910 [2024-12-10 21:48:49.595066] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.910 [2024-12-10 21:48:49.595276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.910 21:48:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:48.910 21:48:49 keyring_file -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:48.910 [2024-12-10 21:48:49.623043] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:48.910 request: 00:21:48.910 { 00:21:48.910 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.910 "secure_channel": false, 00:21:48.910 "listen_address": { 00:21:48.910 "trtype": "tcp", 00:21:48.910 "traddr": "127.0.0.1", 00:21:48.910 "trsvcid": "4420" 00:21:48.910 }, 00:21:48.910 "method": "nvmf_subsystem_add_listener", 00:21:48.910 "req_id": 1 00:21:48.910 } 00:21:48.910 Got JSON-RPC error response 00:21:48.910 response: 00:21:48.910 { 00:21:48.910 "code": -32602, 00:21:48.910 "message": "Invalid parameters" 00:21:48.910 } 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.910 21:48:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=85366 00:21:48.910 21:48:49 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:48.910 21:48:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85366 /var/tmp/bperf.sock 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85366 ']' 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:48.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.910 21:48:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:48.910 [2024-12-10 21:48:49.687762] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:21:48.910 [2024-12-10 21:48:49.687855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85366 ] 00:21:49.168 [2024-12-10 21:48:49.896261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.168 [2024-12-10 21:48:49.930013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.426 [2024-12-10 21:48:49.960345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:49.426 21:48:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.426 21:48:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:49.426 21:48:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:49.426 21:48:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:49.684 21:48:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9P0rrb0b7B 00:21:49.684 21:48:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9P0rrb0b7B 00:21:49.941 21:48:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:49.942 21:48:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:49.942 21:48:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.942 21:48:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:49.942 21:48:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.200 21:48:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.scW7x0JFNh == \/\t\m\p\/\t\m\p\.\s\c\W\7\x\0\J\F\N\h ]] 00:21:50.200 21:48:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:50.200 21:48:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:50.200 21:48:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.200 21:48:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:50.200 21:48:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.767 21:48:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9P0rrb0b7B == \/\t\m\p\/\t\m\p\.\9\P\0\r\r\b\0\b\7\B ]] 00:21:50.767 21:48:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:50.767 21:48:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:50.767 21:48:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.767 21:48:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.767 21:48:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.767 21:48:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.025 21:48:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:51.025 21:48:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:51.025 21:48:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:51.025 21:48:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.025 21:48:51 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:51.025 21:48:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.025 21:48:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.283 21:48:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:51.283 21:48:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:51.283 21:48:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:51.541 [2024-12-10 21:48:52.220772] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.541 nvme0n1 00:21:51.541 21:48:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:51.541 21:48:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:51.541 21:48:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.541 21:48:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.541 21:48:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.541 21:48:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.107 21:48:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:52.107 21:48:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:52.107 21:48:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:52.107 21:48:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:52.107 21:48:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.107 21:48:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.107 21:48:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:52.365 21:48:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:52.365 21:48:52 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.365 Running I/O for 1 seconds... 
00:21:53.738 11374.00 IOPS, 44.43 MiB/s 00:21:53.738 Latency(us) 00:21:53.738 [2024-12-10T21:48:54.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.738 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:53.738 nvme0n1 : 1.01 11417.60 44.60 0.00 0.00 11180.73 4944.99 20018.27 00:21:53.738 [2024-12-10T21:48:54.521Z] =================================================================================================================== 00:21:53.738 [2024-12-10T21:48:54.521Z] Total : 11417.60 44.60 0.00 0.00 11180.73 4944.99 20018.27 00:21:53.738 { 00:21:53.738 "results": [ 00:21:53.738 { 00:21:53.738 "job": "nvme0n1", 00:21:53.738 "core_mask": "0x2", 00:21:53.739 "workload": "randrw", 00:21:53.739 "percentage": 50, 00:21:53.739 "status": "finished", 00:21:53.739 "queue_depth": 128, 00:21:53.739 "io_size": 4096, 00:21:53.739 "runtime": 1.00748, 00:21:53.739 "iops": 11417.596379084449, 00:21:53.739 "mibps": 44.59998585579863, 00:21:53.739 "io_failed": 0, 00:21:53.739 "io_timeout": 0, 00:21:53.739 "avg_latency_us": 11180.729158085243, 00:21:53.739 "min_latency_us": 4944.989090909091, 00:21:53.739 "max_latency_us": 20018.269090909092 00:21:53.739 } 00:21:53.739 ], 00:21:53.739 "core_count": 1 00:21:53.739 } 00:21:53.739 21:48:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:53.739 21:48:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.739 21:48:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.305 21:48:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:54.305 21:48:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:54.305 21:48:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:54.305 21:48:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.305 21:48:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:54.305 21:48:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.305 21:48:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.563 21:48:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:54.563 21:48:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:54.563 21:48:55 keyring_file -- 
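Stripped of the xtrace noise, the happy path that just completed is a handful of RPCs against the bdevperf socket, all taken verbatim from the commands traced above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9P0rrb0b7B
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys                    # key0 refcnt is 2 while nvme0 holds it
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0   # refcnt drops back to 1

The refcount assertions around the I/O run check exactly that: a key picked up by an attached TLS controller reports refcnt 2, and it releases back to 1 once the controller is detached.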
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.563 21:48:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:54.563 21:48:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:54.822 [2024-12-10 21:48:55.380348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:54.822 [2024-12-10 21:48:55.380369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07ce0 (107): Transport endpoint is not connected 00:21:54.822 [2024-12-10 21:48:55.381357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a07ce0 (9): Bad file descriptor 00:21:54.822 [2024-12-10 21:48:55.382354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:54.822 [2024-12-10 21:48:55.382383] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:54.822 [2024-12-10 21:48:55.382395] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:54.822 [2024-12-10 21:48:55.382406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:21:54.822 request: 00:21:54.822 { 00:21:54.822 "name": "nvme0", 00:21:54.822 "trtype": "tcp", 00:21:54.822 "traddr": "127.0.0.1", 00:21:54.822 "adrfam": "ipv4", 00:21:54.822 "trsvcid": "4420", 00:21:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:54.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:54.822 "prchk_reftag": false, 00:21:54.822 "prchk_guard": false, 00:21:54.822 "hdgst": false, 00:21:54.822 "ddgst": false, 00:21:54.822 "psk": "key1", 00:21:54.822 "allow_unrecognized_csi": false, 00:21:54.822 "method": "bdev_nvme_attach_controller", 00:21:54.822 "req_id": 1 00:21:54.822 } 00:21:54.822 Got JSON-RPC error response 00:21:54.822 response: 00:21:54.822 { 00:21:54.822 "code": -5, 00:21:54.822 "message": "Input/output error" 00:21:54.822 } 00:21:54.822 21:48:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:54.822 21:48:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.822 21:48:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.822 21:48:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.822 21:48:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:54.822 21:48:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.822 21:48:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.822 21:48:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.822 21:48:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.822 21:48:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.080 21:48:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:55.080 21:48:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:55.080 21:48:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.080 21:48:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.080 21:48:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.080 21:48:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.080 21:48:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:55.338 21:48:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:55.339 21:48:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:55.339 21:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:55.905 21:48:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:55.905 21:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:56.163 21:48:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:56.163 21:48:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:56.163 21:48:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.421 21:48:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:56.421 21:48:57 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.scW7x0JFNh 00:21:56.421 21:48:57 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.421 21:48:57 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:56.421 21:48:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.421 21:48:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:56.421 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.421 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:56.422 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.422 21:48:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.422 21:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.680 [2024-12-10 21:48:57.312616] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.scW7x0JFNh': 0100660 00:21:56.680 [2024-12-10 21:48:57.312673] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:56.680 request: 00:21:56.680 { 00:21:56.680 "name": "key0", 00:21:56.680 "path": "/tmp/tmp.scW7x0JFNh", 00:21:56.680 "method": "keyring_file_add_key", 00:21:56.680 "req_id": 1 00:21:56.680 } 00:21:56.680 Got JSON-RPC error response 00:21:56.680 response: 00:21:56.680 { 00:21:56.680 "code": -1, 00:21:56.680 "message": "Operation not permitted" 00:21:56.680 } 00:21:56.680 21:48:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:56.680 21:48:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:56.680 21:48:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:56.680 21:48:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:56.680 21:48:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.scW7x0JFNh 00:21:56.680 21:48:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.680 21:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh 00:21:56.938 21:48:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.scW7x0JFNh 00:21:56.938 21:48:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:56.938 21:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:56.938 21:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:56.938 21:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:56.938 21:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:56.938 21:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:57.197 21:48:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:57.197 21:48:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:57.197 21:48:57 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.197 21:48:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:57.197 21:48:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:57.455 [2024-12-10 21:48:58.192803] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.scW7x0JFNh': No such file or directory 00:21:57.455 [2024-12-10 21:48:58.192855] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:57.455 [2024-12-10 21:48:58.192878] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:57.455 [2024-12-10 21:48:58.192888] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:57.455 [2024-12-10 21:48:58.192898] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:57.455 [2024-12-10 21:48:58.192907] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:57.455 request: 00:21:57.455 { 00:21:57.455 "name": "nvme0", 00:21:57.455 "trtype": "tcp", 00:21:57.455 "traddr": "127.0.0.1", 00:21:57.455 "adrfam": "ipv4", 00:21:57.455 "trsvcid": "4420", 00:21:57.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.455 "prchk_reftag": false, 00:21:57.455 "prchk_guard": false, 00:21:57.455 "hdgst": false, 00:21:57.455 "ddgst": false, 00:21:57.455 "psk": "key0", 00:21:57.455 "allow_unrecognized_csi": false, 00:21:57.455 "method": "bdev_nvme_attach_controller", 00:21:57.455 "req_id": 1 00:21:57.455 } 00:21:57.455 Got JSON-RPC error response 00:21:57.455 response: 00:21:57.455 { 00:21:57.455 "code": -19, 00:21:57.455 "message": "No such device" 00:21:57.455 } 00:21:57.455 21:48:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:57.455 21:48:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.455 21:48:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.455 21:48:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.455 21:48:58 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:57.455 21:48:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:58.021 21:48:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:58.021 
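Together the two failures above pin down what keyring_file expects of a key file: it must not be readable by group or others (mode 0100660 is rejected as 'Operation not permitted'), and it must still exist when the controller is attached (a deleted file surfaces as 'No such device'). Condensed, the two negative checks are:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0660 /tmp/tmp.scW7x0JFNh
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh    # rejected: invalid permissions
chmod 0600 /tmp/tmp.scW7x0JFNh
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scW7x0JFNh    # accepted
rm -f /tmp/tmp.scW7x0JFNh
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # fails: the key file is gone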
21:48:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sUOydWKXkL 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:58.021 21:48:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sUOydWKXkL 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sUOydWKXkL 00:21:58.021 21:48:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.sUOydWKXkL 00:21:58.021 21:48:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sUOydWKXkL 00:21:58.021 21:48:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sUOydWKXkL 00:21:58.279 21:48:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.279 21:48:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:58.537 nvme0n1 00:21:58.537 21:48:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:58.537 21:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:58.537 21:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.537 21:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.537 21:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.537 21:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.811 21:48:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:58.811 21:48:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:58.811 21:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:59.069 21:48:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:59.069 21:48:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:59.069 21:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.069 21:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.069 21:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.636 21:49:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:59.636 21:49:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:59.636 21:49:00 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:59.636 21:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:59.636 21:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.636 21:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.636 21:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:59.895 21:49:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:59.895 21:49:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:59.895 21:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:00.153 21:49:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:00.153 21:49:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:00.153 21:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:00.411 21:49:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:00.411 21:49:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sUOydWKXkL 00:22:00.411 21:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sUOydWKXkL 00:22:00.669 21:49:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9P0rrb0b7B 00:22:00.669 21:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9P0rrb0b7B 00:22:00.927 21:49:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:00.927 21:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:01.185 nvme0n1 00:22:01.185 21:49:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:01.185 21:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:01.752 21:49:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:01.752 "subsystems": [ 00:22:01.752 { 00:22:01.752 "subsystem": "keyring", 00:22:01.752 "config": [ 00:22:01.752 { 00:22:01.752 "method": "keyring_file_add_key", 00:22:01.752 "params": { 00:22:01.752 "name": "key0", 00:22:01.752 "path": "/tmp/tmp.sUOydWKXkL" 00:22:01.752 } 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "method": "keyring_file_add_key", 00:22:01.752 "params": { 00:22:01.752 "name": "key1", 00:22:01.752 "path": "/tmp/tmp.9P0rrb0b7B" 00:22:01.752 } 00:22:01.752 } 00:22:01.752 ] 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "subsystem": "iobuf", 00:22:01.752 "config": [ 00:22:01.752 { 00:22:01.752 "method": "iobuf_set_options", 00:22:01.752 "params": { 00:22:01.752 "small_pool_count": 8192, 00:22:01.752 "large_pool_count": 1024, 00:22:01.752 "small_bufsize": 8192, 00:22:01.752 "large_bufsize": 135168, 00:22:01.752 "enable_numa": false 00:22:01.752 } 00:22:01.752 } 00:22:01.752 ] 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "subsystem": 
"sock", 00:22:01.752 "config": [ 00:22:01.752 { 00:22:01.752 "method": "sock_set_default_impl", 00:22:01.752 "params": { 00:22:01.752 "impl_name": "uring" 00:22:01.752 } 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "method": "sock_impl_set_options", 00:22:01.752 "params": { 00:22:01.752 "impl_name": "ssl", 00:22:01.752 "recv_buf_size": 4096, 00:22:01.752 "send_buf_size": 4096, 00:22:01.752 "enable_recv_pipe": true, 00:22:01.752 "enable_quickack": false, 00:22:01.752 "enable_placement_id": 0, 00:22:01.752 "enable_zerocopy_send_server": true, 00:22:01.752 "enable_zerocopy_send_client": false, 00:22:01.752 "zerocopy_threshold": 0, 00:22:01.752 "tls_version": 0, 00:22:01.752 "enable_ktls": false 00:22:01.752 } 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "method": "sock_impl_set_options", 00:22:01.752 "params": { 00:22:01.752 "impl_name": "posix", 00:22:01.752 "recv_buf_size": 2097152, 00:22:01.752 "send_buf_size": 2097152, 00:22:01.752 "enable_recv_pipe": true, 00:22:01.752 "enable_quickack": false, 00:22:01.752 "enable_placement_id": 0, 00:22:01.752 "enable_zerocopy_send_server": true, 00:22:01.752 "enable_zerocopy_send_client": false, 00:22:01.752 "zerocopy_threshold": 0, 00:22:01.752 "tls_version": 0, 00:22:01.752 "enable_ktls": false 00:22:01.752 } 00:22:01.752 }, 00:22:01.752 { 00:22:01.752 "method": "sock_impl_set_options", 00:22:01.752 "params": { 00:22:01.752 "impl_name": "uring", 00:22:01.752 "recv_buf_size": 2097152, 00:22:01.752 "send_buf_size": 2097152, 00:22:01.752 "enable_recv_pipe": true, 00:22:01.752 "enable_quickack": false, 00:22:01.752 "enable_placement_id": 0, 00:22:01.752 "enable_zerocopy_send_server": false, 00:22:01.752 "enable_zerocopy_send_client": false, 00:22:01.752 "zerocopy_threshold": 0, 00:22:01.752 "tls_version": 0, 00:22:01.753 "enable_ktls": false 00:22:01.753 } 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "vmd", 00:22:01.753 "config": [] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "accel", 00:22:01.753 "config": [ 00:22:01.753 { 00:22:01.753 "method": "accel_set_options", 00:22:01.753 "params": { 00:22:01.753 "small_cache_size": 128, 00:22:01.753 "large_cache_size": 16, 00:22:01.753 "task_count": 2048, 00:22:01.753 "sequence_count": 2048, 00:22:01.753 "buf_count": 2048 00:22:01.753 } 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "bdev", 00:22:01.753 "config": [ 00:22:01.753 { 00:22:01.753 "method": "bdev_set_options", 00:22:01.753 "params": { 00:22:01.753 "bdev_io_pool_size": 65535, 00:22:01.753 "bdev_io_cache_size": 256, 00:22:01.753 "bdev_auto_examine": true, 00:22:01.753 "iobuf_small_cache_size": 128, 00:22:01.753 "iobuf_large_cache_size": 16 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_raid_set_options", 00:22:01.753 "params": { 00:22:01.753 "process_window_size_kb": 1024, 00:22:01.753 "process_max_bandwidth_mb_sec": 0 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_iscsi_set_options", 00:22:01.753 "params": { 00:22:01.753 "timeout_sec": 30 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_nvme_set_options", 00:22:01.753 "params": { 00:22:01.753 "action_on_timeout": "none", 00:22:01.753 "timeout_us": 0, 00:22:01.753 "timeout_admin_us": 0, 00:22:01.753 "keep_alive_timeout_ms": 10000, 00:22:01.753 "arbitration_burst": 0, 00:22:01.753 "low_priority_weight": 0, 00:22:01.753 "medium_priority_weight": 0, 00:22:01.753 "high_priority_weight": 0, 00:22:01.753 "nvme_adminq_poll_period_us": 
10000, 00:22:01.753 "nvme_ioq_poll_period_us": 0, 00:22:01.753 "io_queue_requests": 512, 00:22:01.753 "delay_cmd_submit": true, 00:22:01.753 "transport_retry_count": 4, 00:22:01.753 "bdev_retry_count": 3, 00:22:01.753 "transport_ack_timeout": 0, 00:22:01.753 "ctrlr_loss_timeout_sec": 0, 00:22:01.753 "reconnect_delay_sec": 0, 00:22:01.753 "fast_io_fail_timeout_sec": 0, 00:22:01.753 "disable_auto_failback": false, 00:22:01.753 "generate_uuids": false, 00:22:01.753 "transport_tos": 0, 00:22:01.753 "nvme_error_stat": false, 00:22:01.753 "rdma_srq_size": 0, 00:22:01.753 "io_path_stat": false, 00:22:01.753 "allow_accel_sequence": false, 00:22:01.753 "rdma_max_cq_size": 0, 00:22:01.753 "rdma_cm_event_timeout_ms": 0, 00:22:01.753 "dhchap_digests": [ 00:22:01.753 "sha256", 00:22:01.753 "sha384", 00:22:01.753 "sha512" 00:22:01.753 ], 00:22:01.753 "dhchap_dhgroups": [ 00:22:01.753 "null", 00:22:01.753 "ffdhe2048", 00:22:01.753 "ffdhe3072", 00:22:01.753 "ffdhe4096", 00:22:01.753 "ffdhe6144", 00:22:01.753 "ffdhe8192" 00:22:01.753 ], 00:22:01.753 "rdma_umr_per_io": false 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_nvme_attach_controller", 00:22:01.753 "params": { 00:22:01.753 "name": "nvme0", 00:22:01.753 "trtype": "TCP", 00:22:01.753 "adrfam": "IPv4", 00:22:01.753 "traddr": "127.0.0.1", 00:22:01.753 "trsvcid": "4420", 00:22:01.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.753 "prchk_reftag": false, 00:22:01.753 "prchk_guard": false, 00:22:01.753 "ctrlr_loss_timeout_sec": 0, 00:22:01.753 "reconnect_delay_sec": 0, 00:22:01.753 "fast_io_fail_timeout_sec": 0, 00:22:01.753 "psk": "key0", 00:22:01.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:01.753 "hdgst": false, 00:22:01.753 "ddgst": false, 00:22:01.753 "multipath": "multipath" 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_nvme_set_hotplug", 00:22:01.753 "params": { 00:22:01.753 "period_us": 100000, 00:22:01.753 "enable": false 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "bdev_wait_for_examine" 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "nbd", 00:22:01.753 "config": [] 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }' 00:22:01.753 21:49:02 keyring_file -- keyring/file.sh@115 -- # killprocess 85366 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85366 ']' 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85366 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85366 00:22:01.753 killing process with pid 85366 00:22:01.753 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.753 00:22:01.753 Latency(us) 00:22:01.753 [2024-12-10T21:49:02.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.753 [2024-12-10T21:49:02.536Z] =================================================================================================================== 00:22:01.753 [2024-12-10T21:49:02.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 85366' 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@973 -- # kill 85366 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@978 -- # wait 85366 00:22:01.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.753 21:49:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=85626 00:22:01.753 21:49:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85626 /var/tmp/bperf.sock 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85626 ']' 00:22:01.753 21:49:02 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.753 21:49:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.753 21:49:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:01.753 "subsystems": [ 00:22:01.753 { 00:22:01.753 "subsystem": "keyring", 00:22:01.753 "config": [ 00:22:01.753 { 00:22:01.753 "method": "keyring_file_add_key", 00:22:01.753 "params": { 00:22:01.753 "name": "key0", 00:22:01.753 "path": "/tmp/tmp.sUOydWKXkL" 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "keyring_file_add_key", 00:22:01.753 "params": { 00:22:01.753 "name": "key1", 00:22:01.753 "path": "/tmp/tmp.9P0rrb0b7B" 00:22:01.753 } 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "iobuf", 00:22:01.753 "config": [ 00:22:01.753 { 00:22:01.753 "method": "iobuf_set_options", 00:22:01.753 "params": { 00:22:01.753 "small_pool_count": 8192, 00:22:01.753 "large_pool_count": 1024, 00:22:01.753 "small_bufsize": 8192, 00:22:01.753 "large_bufsize": 135168, 00:22:01.753 "enable_numa": false 00:22:01.753 } 00:22:01.753 } 00:22:01.753 ] 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "subsystem": "sock", 00:22:01.753 "config": [ 00:22:01.753 { 00:22:01.753 "method": "sock_set_default_impl", 00:22:01.753 "params": { 00:22:01.753 "impl_name": "uring" 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "sock_impl_set_options", 00:22:01.753 "params": { 00:22:01.753 "impl_name": "ssl", 00:22:01.753 "recv_buf_size": 4096, 00:22:01.753 "send_buf_size": 4096, 00:22:01.753 "enable_recv_pipe": true, 00:22:01.753 "enable_quickack": false, 00:22:01.753 "enable_placement_id": 0, 00:22:01.753 "enable_zerocopy_send_server": true, 00:22:01.753 "enable_zerocopy_send_client": false, 00:22:01.753 "zerocopy_threshold": 0, 00:22:01.753 "tls_version": 0, 00:22:01.753 "enable_ktls": false 00:22:01.753 } 00:22:01.753 }, 00:22:01.753 { 00:22:01.753 "method": "sock_impl_set_options", 00:22:01.753 "params": { 00:22:01.753 "impl_name": "posix", 00:22:01.753 "recv_buf_size": 2097152, 00:22:01.753 "send_buf_size": 2097152, 00:22:01.753 "enable_recv_pipe": true, 00:22:01.753 "enable_quickack": false, 00:22:01.753 "enable_placement_id": 0, 00:22:01.754 "enable_zerocopy_send_server": true, 00:22:01.754 "enable_zerocopy_send_client": false, 00:22:01.754 "zerocopy_threshold": 0, 00:22:01.754 "tls_version": 0, 00:22:01.754 "enable_ktls": false 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 
00:22:01.754 "method": "sock_impl_set_options", 00:22:01.754 "params": { 00:22:01.754 "impl_name": "uring", 00:22:01.754 "recv_buf_size": 2097152, 00:22:01.754 "send_buf_size": 2097152, 00:22:01.754 "enable_recv_pipe": true, 00:22:01.754 "enable_quickack": false, 00:22:01.754 "enable_placement_id": 0, 00:22:01.754 "enable_zerocopy_send_server": false, 00:22:01.754 "enable_zerocopy_send_client": false, 00:22:01.754 "zerocopy_threshold": 0, 00:22:01.754 "tls_version": 0, 00:22:01.754 "enable_ktls": false 00:22:01.754 } 00:22:01.754 } 00:22:01.754 ] 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "subsystem": "vmd", 00:22:01.754 "config": [] 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "subsystem": "accel", 00:22:01.754 "config": [ 00:22:01.754 { 00:22:01.754 "method": "accel_set_options", 00:22:01.754 "params": { 00:22:01.754 "small_cache_size": 128, 00:22:01.754 "large_cache_size": 16, 00:22:01.754 "task_count": 2048, 00:22:01.754 "sequence_count": 2048, 00:22:01.754 "buf_count": 2048 00:22:01.754 } 00:22:01.754 } 00:22:01.754 ] 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "subsystem": "bdev", 00:22:01.754 "config": [ 00:22:01.754 { 00:22:01.754 "method": "bdev_set_options", 00:22:01.754 "params": { 00:22:01.754 "bdev_io_pool_size": 65535, 00:22:01.754 "bdev_io_cache_size": 256, 00:22:01.754 "bdev_auto_examine": true, 00:22:01.754 "iobuf_small_cache_size": 128, 00:22:01.754 "iobuf_large_cache_size": 16 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_raid_set_options", 00:22:01.754 "params": { 00:22:01.754 "process_window_size_kb": 1024, 00:22:01.754 "process_max_bandwidth_mb_sec": 0 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_iscsi_set_options", 00:22:01.754 "params": { 00:22:01.754 "timeout_sec": 30 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_nvme_set_options", 00:22:01.754 "params": { 00:22:01.754 "action_on_timeout": "none", 00:22:01.754 "timeout_us": 0, 00:22:01.754 "timeout_admin_us": 0, 00:22:01.754 "keep_alive_timeout_ms": 10000, 00:22:01.754 "arbitration_burst": 0, 00:22:01.754 "low_priority_weight": 0, 00:22:01.754 "medium_priority_weight": 0, 00:22:01.754 "high_priority_weight": 0, 00:22:01.754 "nvme_adminq_poll_period_us": 10000, 00:22:01.754 "nvme_ioq_poll_period_us": 0, 00:22:01.754 "io_queue_requests": 512, 00:22:01.754 "delay_cmd_submit": true, 00:22:01.754 "transport_retry_count": 4, 00:22:01.754 "bdev_retry_count": 3, 00:22:01.754 "transport_ack_timeout": 0, 00:22:01.754 "ctrlr_loss_timeout_sec": 0, 00:22:01.754 "reconnect_delay_sec": 0, 00:22:01.754 "fast_io_fail_timeout_sec": 0, 00:22:01.754 "disable_auto_failback": false, 00:22:01.754 "generate_uuids": false, 00:22:01.754 "transport_tos": 0, 00:22:01.754 "nvme_error_stat": false, 00:22:01.754 "rdma_srq_size": 0, 00:22:01.754 "io_path_stat": false, 00:22:01.754 "allow_accel_sequence": false, 00:22:01.754 "rdma_max_cq_size": 0, 00:22:01.754 "rdma_cm_event_timeout_ms": 0, 00:22:01.754 "dhchap_digests": [ 00:22:01.754 "sha256", 00:22:01.754 "sha384", 00:22:01.754 "sha512" 00:22:01.754 ], 00:22:01.754 "dhchap_dhgroups": [ 00:22:01.754 "null", 00:22:01.754 "ffdhe2048", 00:22:01.754 "ffdhe3072", 00:22:01.754 "ffdhe4096", 00:22:01.754 "ffdhe6144", 00:22:01.754 "ffdhe8192" 00:22:01.754 ], 00:22:01.754 "rdma_umr_per_io": false 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_nvme_attach_controller", 00:22:01.754 "params": { 00:22:01.754 "name": "nvme0", 00:22:01.754 "trtype": "TCP", 00:22:01.754 "adrfam": "IPv4", 
00:22:01.754 "traddr": "127.0.0.1", 00:22:01.754 "trsvcid": "4420", 00:22:01.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.754 "prchk_reftag": false, 00:22:01.754 "prchk_guard": false, 00:22:01.754 "ctrlr_loss_timeout_sec": 0, 00:22:01.754 "reconnect_delay_sec": 0, 00:22:01.754 "fast_io_fail_timeout_sec": 0, 00:22:01.754 "psk": "key0", 00:22:01.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:01.754 "hdgst": false, 00:22:01.754 "ddgst": false, 00:22:01.754 "multipath": "multipath" 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_nvme_set_hotplug", 00:22:01.754 "params": { 00:22:01.754 "period_us": 100000, 00:22:01.754 "enable": false 00:22:01.754 } 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "method": "bdev_wait_for_examine" 00:22:01.754 } 00:22:01.754 ] 00:22:01.754 }, 00:22:01.754 { 00:22:01.754 "subsystem": "nbd", 00:22:01.754 "config": [] 00:22:01.754 } 00:22:01.754 ] 00:22:01.754 }' 00:22:01.754 21:49:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:01.754 [2024-12-10 21:49:02.457235] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 00:22:01.754 [2024-12-10 21:49:02.457323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85626 ] 00:22:02.012 [2024-12-10 21:49:02.595064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.012 [2024-12-10 21:49:02.628632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.012 [2024-12-10 21:49:02.740412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:02.012 [2024-12-10 21:49:02.781790] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.948 21:49:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.948 21:49:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:02.948 21:49:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:02.948 21:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:02.948 21:49:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:03.206 21:49:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:03.206 21:49:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:03.206 21:49:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:03.206 21:49:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.206 21:49:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.206 21:49:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:03.206 21:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.773 21:49:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:03.773 21:49:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:03.773 21:49:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:03.773 21:49:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:03.773 21:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.773 21:49:04 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.773 21:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:04.031 21:49:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:04.031 21:49:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:04.031 21:49:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:04.031 21:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:04.289 21:49:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:04.289 21:49:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:04.289 21:49:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sUOydWKXkL /tmp/tmp.9P0rrb0b7B 00:22:04.289 21:49:04 keyring_file -- keyring/file.sh@20 -- # killprocess 85626 00:22:04.289 21:49:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85626 ']' 00:22:04.289 21:49:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85626 00:22:04.289 21:49:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:04.289 21:49:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.289 21:49:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85626 00:22:04.289 killing process with pid 85626 00:22:04.290 Received shutdown signal, test time was about 1.000000 seconds 00:22:04.290 00:22:04.290 Latency(us) 00:22:04.290 [2024-12-10T21:49:05.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.290 [2024-12-10T21:49:05.073Z] =================================================================================================================== 00:22:04.290 [2024-12-10T21:49:05.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.290 21:49:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.290 21:49:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:04.290 21:49:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85626' 00:22:04.290 21:49:04 keyring_file -- common/autotest_common.sh@973 -- # kill 85626 00:22:04.290 21:49:04 keyring_file -- common/autotest_common.sh@978 -- # wait 85626 00:22:04.290 21:49:05 keyring_file -- keyring/file.sh@21 -- # killprocess 85361 00:22:04.290 21:49:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85361 ']' 00:22:04.290 21:49:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85361 00:22:04.290 21:49:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:04.290 21:49:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.290 21:49:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85361 00:22:04.546 killing process with pid 85361 00:22:04.546 21:49:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.546 21:49:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.546 21:49:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85361' 00:22:04.546 21:49:05 keyring_file -- common/autotest_common.sh@973 -- # kill 85361 00:22:04.546 21:49:05 keyring_file -- common/autotest_common.sh@978 -- # wait 85361 00:22:04.804 00:22:04.804 real 0m16.479s 00:22:04.804 user 0m43.423s 00:22:04.804 sys 0m2.873s 00:22:04.804 
21:49:05 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.804 21:49:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:04.804 ************************************ 00:22:04.804 END TEST keyring_file 00:22:04.804 ************************************ 00:22:04.804 21:49:05 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:22:04.804 21:49:05 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:04.804 21:49:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.804 21:49:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.804 21:49:05 -- common/autotest_common.sh@10 -- # set +x 00:22:04.804 ************************************ 00:22:04.804 START TEST keyring_linux 00:22:04.804 ************************************ 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:04.804 Joined session keyring: 230504339 00:22:04.804 * Looking for test storage... 00:22:04.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.804 --rc genhtml_branch_coverage=1 00:22:04.804 --rc genhtml_function_coverage=1 00:22:04.804 --rc genhtml_legend=1 00:22:04.804 --rc geninfo_all_blocks=1 00:22:04.804 --rc geninfo_unexecuted_blocks=1 00:22:04.804 00:22:04.804 ' 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.804 --rc genhtml_branch_coverage=1 00:22:04.804 --rc genhtml_function_coverage=1 00:22:04.804 --rc genhtml_legend=1 00:22:04.804 --rc geninfo_all_blocks=1 00:22:04.804 --rc geninfo_unexecuted_blocks=1 00:22:04.804 00:22:04.804 ' 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.804 --rc genhtml_branch_coverage=1 00:22:04.804 --rc genhtml_function_coverage=1 00:22:04.804 --rc genhtml_legend=1 00:22:04.804 --rc geninfo_all_blocks=1 00:22:04.804 --rc geninfo_unexecuted_blocks=1 00:22:04.804 00:22:04.804 ' 00:22:04.804 21:49:05 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.804 --rc genhtml_branch_coverage=1 00:22:04.804 --rc genhtml_function_coverage=1 00:22:04.804 --rc genhtml_legend=1 00:22:04.804 --rc geninfo_all_blocks=1 00:22:04.804 --rc geninfo_unexecuted_blocks=1 00:22:04.804 00:22:04.804 ' 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.804 21:49:05 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=3cdb6c65-29d7-4335-9fe2-a5e5f70b9a1c 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.804 21:49:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.804 21:49:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.804 21:49:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.804 21:49:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.804 21:49:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:04.804 21:49:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.804 21:49:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:04.804 21:49:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:04.804 21:49:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:04.805 21:49:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:04.805 21:49:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:04.805 21:49:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:05.118 /tmp/:spdk-test:key0 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:05.118 21:49:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:05.118 21:49:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:05.118 /tmp/:spdk-test:key1 00:22:05.118 21:49:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:05.118 21:49:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85752 00:22:05.118 21:49:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:05.118 21:49:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85752 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85752 ']' 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.118 21:49:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:05.118 [2024-12-10 21:49:05.715089] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:22:05.118 [2024-12-10 21:49:05.715207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85752 ] 00:22:05.376 [2024-12-10 21:49:05.914582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.376 [2024-12-10 21:49:05.962731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.376 [2024-12-10 21:49:06.005836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:05.941 21:49:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.941 21:49:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:05.941 21:49:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:05.941 21:49:06 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.941 21:49:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:05.941 [2024-12-10 21:49:06.720968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.198 null0 00:22:06.198 [2024-12-10 21:49:06.752931] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.198 [2024-12-10 21:49:06.753111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 21:49:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:06.198 782323823 00:22:06.198 21:49:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:06.198 594032179 00:22:06.198 21:49:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85766 00:22:06.198 21:49:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:06.198 21:49:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85766 /var/tmp/bperf.sock 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85766 ']' 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:06.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.198 21:49:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 [2024-12-10 21:49:06.847074] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization... 
00:22:06.198 [2024-12-10 21:49:06.847201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85766 ] 00:22:06.454 [2024-12-10 21:49:07.001468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.454 [2024-12-10 21:49:07.035634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.454 21:49:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.454 21:49:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:06.454 21:49:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:06.454 21:49:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:06.712 21:49:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:06.712 21:49:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:06.970 [2024-12-10 21:49:07.653814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:06.970 21:49:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:06.970 21:49:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:07.228 [2024-12-10 21:49:07.973856] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.486 nvme0n1 00:22:07.486 21:49:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:07.486 21:49:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:07.486 21:49:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:07.486 21:49:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:07.486 21:49:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:07.486 21:49:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.744 21:49:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:07.744 21:49:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:07.744 21:49:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:07.744 21:49:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:07.744 21:49:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:07.744 21:49:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:07.744 21:49:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@25 -- # sn=782323823 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 782323823 == \7\8\2\3\2\3\8\2\3 ]] 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 782323823 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:08.001 21:49:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.259 Running I/O for 1 seconds... 00:22:09.193 13069.00 IOPS, 51.05 MiB/s 00:22:09.193 Latency(us) 00:22:09.193 [2024-12-10T21:49:09.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.193 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:09.193 nvme0n1 : 1.01 13077.08 51.08 0.00 0.00 9737.84 7447.27 17635.14 00:22:09.193 [2024-12-10T21:49:09.976Z] =================================================================================================================== 00:22:09.193 [2024-12-10T21:49:09.976Z] Total : 13077.08 51.08 0.00 0.00 9737.84 7447.27 17635.14 00:22:09.193 { 00:22:09.193 "results": [ 00:22:09.193 { 00:22:09.193 "job": "nvme0n1", 00:22:09.193 "core_mask": "0x2", 00:22:09.193 "workload": "randread", 00:22:09.193 "status": "finished", 00:22:09.193 "queue_depth": 128, 00:22:09.193 "io_size": 4096, 00:22:09.193 "runtime": 1.009247, 00:22:09.193 "iops": 13077.076275678799, 00:22:09.193 "mibps": 51.08232920187031, 00:22:09.193 "io_failed": 0, 00:22:09.193 "io_timeout": 0, 00:22:09.193 "avg_latency_us": 9737.842328727493, 00:22:09.193 "min_latency_us": 7447.272727272727, 00:22:09.193 "max_latency_us": 17635.14181818182 00:22:09.193 } 00:22:09.193 ], 00:22:09.194 "core_count": 1 00:22:09.194 } 00:22:09.194 21:49:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:09.194 21:49:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:09.760 21:49:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:09.760 21:49:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:09.760 21:49:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:09.760 21:49:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:09.760 21:49:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.760 21:49:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:10.018 21:49:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:10.018 21:49:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:10.018 21:49:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:10.018 21:49:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:10.018 
21:49:10 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.018 21:49:10 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:10.018 21:49:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:10.277 [2024-12-10 21:49:10.838456] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:10.277 [2024-12-10 21:49:10.839085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1426b90 (107): Transport endpoint is not connected 00:22:10.277 [2024-12-10 21:49:10.840074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1426b90 (9): Bad file descriptor 00:22:10.277 [2024-12-10 21:49:10.841070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:10.277 [2024-12-10 21:49:10.841091] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:10.277 [2024-12-10 21:49:10.841101] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:10.277 [2024-12-10 21:49:10.841112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:10.277 request: 00:22:10.277 { 00:22:10.277 "name": "nvme0", 00:22:10.277 "trtype": "tcp", 00:22:10.277 "traddr": "127.0.0.1", 00:22:10.277 "adrfam": "ipv4", 00:22:10.277 "trsvcid": "4420", 00:22:10.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:10.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:10.277 "prchk_reftag": false, 00:22:10.277 "prchk_guard": false, 00:22:10.277 "hdgst": false, 00:22:10.277 "ddgst": false, 00:22:10.277 "psk": ":spdk-test:key1", 00:22:10.277 "allow_unrecognized_csi": false, 00:22:10.277 "method": "bdev_nvme_attach_controller", 00:22:10.277 "req_id": 1 00:22:10.277 } 00:22:10.277 Got JSON-RPC error response 00:22:10.277 response: 00:22:10.277 { 00:22:10.277 "code": -5, 00:22:10.277 "message": "Input/output error" 00:22:10.277 } 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@33 -- # sn=782323823 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 782323823 00:22:10.277 1 links removed 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@33 -- # sn=594032179 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 594032179 00:22:10.277 1 links removed 00:22:10.277 21:49:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85766 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85766 ']' 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85766 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85766 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85766' 00:22:10.277 killing process with pid 85766 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 85766 00:22:10.277 Received shutdown signal, test time was about 1.000000 seconds 00:22:10.277 00:22:10.277 Latency(us) 
00:22:10.277 [2024-12-10T21:49:11.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.277 [2024-12-10T21:49:11.060Z] =================================================================================================================== 00:22:10.277 [2024-12-10T21:49:11.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.277 21:49:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 85766 00:22:10.277 21:49:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85752 00:22:10.277 21:49:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85752 ']' 00:22:10.277 21:49:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85752 00:22:10.277 21:49:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:10.277 21:49:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.277 21:49:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85752 00:22:10.535 21:49:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.535 21:49:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.535 21:49:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85752' 00:22:10.535 killing process with pid 85752 00:22:10.535 21:49:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 85752 00:22:10.535 21:49:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 85752 00:22:10.794 00:22:10.794 real 0m5.948s 00:22:10.794 user 0m12.105s 00:22:10.794 sys 0m1.366s 00:22:10.794 21:49:11 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.794 ************************************ 00:22:10.794 END TEST keyring_linux 00:22:10.794 ************************************ 00:22:10.794 21:49:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:10.794 21:49:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:10.794 21:49:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:10.794 21:49:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:10.794 21:49:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:10.794 21:49:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:10.794 21:49:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:22:10.794 21:49:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:10.794 21:49:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.794 21:49:11 -- common/autotest_common.sh@10 -- # set +x 00:22:10.794 21:49:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:10.794 21:49:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:10.794 21:49:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:10.794 21:49:11 -- common/autotest_common.sh@10 -- # set +x 00:22:12.695 INFO: APP EXITING 00:22:12.695 INFO: killing all VMs 
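The cleanup trap traced above does two things: it resolves each :spdk-test:* key back to its serial number and unlinks it from the session keyring, then it stops the test processes by pid. A condensed sketch of those two helpers, reconstructed from the trace (names follow keyring/linux.sh and autotest_common.sh, but this is not the exact source):

    # Drop one test key from the session keyring (sketch of unlink_key from the trace).
    unlink_key() {
        local name=$1 sn
        # Look up the key's serial number by its description (what get_keysn does above).
        sn=$(keyctl search @s user ":spdk-test:$name")
        # Unlink it so the kernel can garbage-collect the key.
        keyctl unlink "$sn"
    }

    # Stop a test process by pid (simplified killprocess; the real helper also
    # branches on process_name to special-case sudo wrappers, omitted here).
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                       # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it and propagate its exit status
    }

    # Pids below are the ones seen in this run's log.
    unlink_key key0
    unlink_key key1
    killprocess 85766
    killprocess 85752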
00:22:12.695 INFO: killing vhost app 00:22:12.695 INFO: EXIT DONE 00:22:12.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:12.954 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:12.954 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:13.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.889 Cleaning 00:22:13.889 Removing: /var/run/dpdk/spdk0/config 00:22:13.889 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:13.889 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:13.889 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:13.889 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:13.889 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:13.889 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:13.889 Removing: /var/run/dpdk/spdk1/config 00:22:13.889 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:13.889 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:13.889 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:13.889 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:13.889 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:13.889 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:13.889 Removing: /var/run/dpdk/spdk2/config 00:22:13.889 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:13.889 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:13.889 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:13.889 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:13.889 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:13.889 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:13.889 Removing: /var/run/dpdk/spdk3/config 00:22:13.889 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:13.889 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:13.889 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:13.889 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:13.889 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:13.889 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:13.889 Removing: /var/run/dpdk/spdk4/config 00:22:13.889 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:13.889 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:13.889 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:13.889 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:13.889 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:13.889 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:13.889 Removing: /dev/shm/nvmf_trace.0 00:22:13.889 Removing: /dev/shm/spdk_tgt_trace.pid56931 00:22:13.889 Removing: /var/run/dpdk/spdk0 00:22:13.889 Removing: /var/run/dpdk/spdk1 00:22:13.889 Removing: /var/run/dpdk/spdk2 00:22:13.889 Removing: /var/run/dpdk/spdk3 00:22:13.889 Removing: /var/run/dpdk/spdk4 00:22:13.889 Removing: /var/run/dpdk/spdk_pid56778 00:22:13.889 Removing: /var/run/dpdk/spdk_pid56931 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57124 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57205 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57225 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57335 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57353 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57492 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57693 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57847 00:22:13.889 Removing: /var/run/dpdk/spdk_pid57920 00:22:13.889 
Removing: /var/run/dpdk/spdk_pid57996 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58088 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58160 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58193 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58227 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58298 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58390 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58839 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58879 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58922 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58925 00:22:13.889 Removing: /var/run/dpdk/spdk_pid58992 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59001 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59055 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59063 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59109 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59119 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59159 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59170 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59293 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59328 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59411 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59737 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59755 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59786 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59799 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59815 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59834 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59853 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59863 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59882 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59901 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59911 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59941 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59949 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59972 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59991 00:22:13.889 Removing: /var/run/dpdk/spdk_pid59999 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60020 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60039 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60047 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60068 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60093 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60112 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60136 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60208 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60237 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60246 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60275 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60286 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60294 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60336 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60344 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60377 00:22:13.889 Removing: /var/run/dpdk/spdk_pid60382 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60392 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60401 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60411 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60419 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60424 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60434 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60462 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60489 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60498 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60528 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60536 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60544 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60580 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60591 00:22:14.148 Removing: 
/var/run/dpdk/spdk_pid60618 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60625 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60633 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60640 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60648 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60655 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60663 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60665 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60747 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60800 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60917 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60948 00:22:14.148 Removing: /var/run/dpdk/spdk_pid60991 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61000 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61022 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61037 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61068 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61089 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61167 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61185 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61235 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61310 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61372 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61400 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61500 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61542 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61580 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61801 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61899 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61927 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61953 00:22:14.148 Removing: /var/run/dpdk/spdk_pid61990 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62024 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62059 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62089 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62477 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62517 00:22:14.148 Removing: /var/run/dpdk/spdk_pid62859 00:22:14.148 Removing: /var/run/dpdk/spdk_pid63329 00:22:14.148 Removing: /var/run/dpdk/spdk_pid63615 00:22:14.148 Removing: /var/run/dpdk/spdk_pid64455 00:22:14.148 Removing: /var/run/dpdk/spdk_pid65376 00:22:14.148 Removing: /var/run/dpdk/spdk_pid65498 00:22:14.148 Removing: /var/run/dpdk/spdk_pid65561 00:22:14.148 Removing: /var/run/dpdk/spdk_pid66977 00:22:14.148 Removing: /var/run/dpdk/spdk_pid67278 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71220 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71578 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71686 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71820 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71845 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71866 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71883 00:22:14.148 Removing: /var/run/dpdk/spdk_pid71973 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72104 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72250 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72324 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72513 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72576 00:22:14.148 Removing: /var/run/dpdk/spdk_pid72661 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73000 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73402 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73403 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73404 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73667 00:22:14.148 Removing: /var/run/dpdk/spdk_pid73923 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74303 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74309 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74628 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74646 
00:22:14.148 Removing: /var/run/dpdk/spdk_pid74661 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74692 00:22:14.148 Removing: /var/run/dpdk/spdk_pid74697 00:22:14.148 Removing: /var/run/dpdk/spdk_pid75057 00:22:14.148 Removing: /var/run/dpdk/spdk_pid75100 00:22:14.148 Removing: /var/run/dpdk/spdk_pid75432 00:22:14.148 Removing: /var/run/dpdk/spdk_pid75622 00:22:14.148 Removing: /var/run/dpdk/spdk_pid76050 00:22:14.148 Removing: /var/run/dpdk/spdk_pid76588 00:22:14.148 Removing: /var/run/dpdk/spdk_pid77494 00:22:14.148 Removing: /var/run/dpdk/spdk_pid78121 00:22:14.148 Removing: /var/run/dpdk/spdk_pid78123 00:22:14.148 Removing: /var/run/dpdk/spdk_pid80161 00:22:14.148 Removing: /var/run/dpdk/spdk_pid80214 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80267 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80321 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80421 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80480 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80533 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80587 00:22:14.407 Removing: /var/run/dpdk/spdk_pid80952 00:22:14.407 Removing: /var/run/dpdk/spdk_pid82156 00:22:14.407 Removing: /var/run/dpdk/spdk_pid82295 00:22:14.408 Removing: /var/run/dpdk/spdk_pid82530 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83116 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83276 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83429 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83521 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83681 00:22:14.408 Removing: /var/run/dpdk/spdk_pid83790 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84499 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84534 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84574 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84821 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84857 00:22:14.408 Removing: /var/run/dpdk/spdk_pid84893 00:22:14.408 Removing: /var/run/dpdk/spdk_pid85361 00:22:14.408 Removing: /var/run/dpdk/spdk_pid85366 00:22:14.408 Removing: /var/run/dpdk/spdk_pid85626 00:22:14.408 Removing: /var/run/dpdk/spdk_pid85752 00:22:14.408 Removing: /var/run/dpdk/spdk_pid85766 00:22:14.408 Clean 00:22:14.408 21:49:15 -- common/autotest_common.sh@1453 -- # return 0 00:22:14.408 21:49:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:14.408 21:49:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.408 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:22:14.408 21:49:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:14.408 21:49:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.408 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:22:14.408 21:49:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:14.408 21:49:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:14.408 21:49:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:14.408 21:49:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:14.408 21:49:15 -- spdk/autotest.sh@398 -- # hostname 00:22:14.408 21:49:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:14.669 geninfo: WARNING: invalid characters removed from testname! 
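The Clean step listed above removes per-target DPDK runtime state (config, fbarray segments, hugepage info), the shared-memory trace files, and the stale per-pid entries under /var/run/dpdk. In shell terms it amounts to something like the following; the paths come from the messages above, but the use of plain rm here is an assumption rather than the actual autotest implementation:

    # Assumed equivalent of the Clean step above (the real cleanup may differ).
    for d in /var/run/dpdk/spdk0 /var/run/dpdk/spdk1 /var/run/dpdk/spdk2 \
             /var/run/dpdk/spdk3 /var/run/dpdk/spdk4; do
        rm -rf "$d"            # config, fbarray_memseg-*, fbarray_memzone, hugepage_info
    done
    rm -f /dev/shm/nvmf_trace.0 /dev/shm/spdk_tgt_trace.pid*   # trace shared memory
    rm -f /var/run/dpdk/spdk_pid*                              # stale per-pid markers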
00:22:46.735 21:49:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:46.735 21:49:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.276 21:49:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:51.803 21:49:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:55.089 21:49:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:57.621 21:49:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.160 21:50:00 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:00.160 21:50:00 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:00.160 21:50:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:00.160 21:50:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:00.160 21:50:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:00.160 21:50:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:00.438 + [[ -n 5266 ]] 00:23:00.438 + sudo kill 5266 00:23:00.447 [Pipeline] } 00:23:00.462 [Pipeline] // timeout 00:23:00.468 [Pipeline] } 00:23:00.482 [Pipeline] // stage 00:23:00.487 [Pipeline] } 00:23:00.499 [Pipeline] // catchError 00:23:00.509 [Pipeline] stage 00:23:00.511 [Pipeline] { (Stop VM) 00:23:00.525 [Pipeline] sh 00:23:00.802 + vagrant halt 00:23:04.987 ==> default: Halting domain... 
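The coverage post-processing traced above first captures test-time counters with the VM hostname as the lcov test name, then merges them with the baseline capture and strips out everything that is not SPDK's own code. A condensed sketch with the long --rc option list elided (OUT is just shorthand for the output directory used in this job):

    OUT=/home/vagrant/spdk_repo/spdk/../output

    # Merge the baseline and test-time captures into a single tracefile.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Drop coverage attributed to bundled DPDK, system headers, and example/app code.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

After the coverage pass, timing.txt is rendered with flamegraph.pl only when that script is present, as the trace above shows.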
00:23:10.268 [Pipeline] sh 00:23:10.547 + vagrant destroy -f 00:23:14.730 ==> default: Removing domain... 00:23:14.742 [Pipeline] sh 00:23:15.020 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:15.029 [Pipeline] } 00:23:15.043 [Pipeline] // stage 00:23:15.048 [Pipeline] } 00:23:15.062 [Pipeline] // dir 00:23:15.068 [Pipeline] } 00:23:15.082 [Pipeline] // wrap 00:23:15.088 [Pipeline] } 00:23:15.100 [Pipeline] // catchError 00:23:15.111 [Pipeline] stage 00:23:15.113 [Pipeline] { (Epilogue) 00:23:15.124 [Pipeline] sh 00:23:15.400 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:23.524 [Pipeline] catchError 00:23:23.526 [Pipeline] { 00:23:23.539 [Pipeline] sh 00:23:23.819 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:24.078 Artifacts sizes are good 00:23:24.086 [Pipeline] } 00:23:24.100 [Pipeline] // catchError 00:23:24.111 [Pipeline] archiveArtifacts 00:23:24.118 Archiving artifacts 00:23:24.237 [Pipeline] cleanWs 00:23:24.249 [WS-CLEANUP] Deleting project workspace... 00:23:24.249 [WS-CLEANUP] Deferred wipeout is used... 00:23:24.256 [WS-CLEANUP] done 00:23:24.258 [Pipeline] } 00:23:24.274 [Pipeline] // stage 00:23:24.281 [Pipeline] } 00:23:24.296 [Pipeline] // node 00:23:24.301 [Pipeline] End of Pipeline 00:23:24.343 Finished: SUCCESS
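For completeness, the teardown at the end of the run reduces to halting and destroying the vagrant VM and moving the collected output into the Jenkins workspace, after which the jbp scripts compress, size-check, and archive the artifacts. The commands doing the heavy lifting are taken directly from the log above:

    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output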